[{"data":1,"prerenderedAt":2547},["ShallowReactive",2],{"/en-us/blog/tags/kubernetes/":3,"navigation-en-us":19,"banner-en-us":439,"footer-en-us":456,"kubernetes-tag-page-en-us":666},{"_path":4,"_dir":5,"_draft":6,"_partial":6,"_locale":7,"content":8,"config":10,"_id":12,"_type":13,"title":14,"_source":15,"_file":16,"_stem":17,"_extension":18},"/en-us/blog/tags/kubernetes","tags",false,"",{"tag":9,"tagSlug":9},"kubernetes",{"template":11},"BlogTag","content:en-us:blog:tags:kubernetes.yml","yaml","Kubernetes","content","en-us/blog/tags/kubernetes.yml","en-us/blog/tags/kubernetes","yml",{"_path":20,"_dir":21,"_draft":6,"_partial":6,"_locale":7,"data":22,"_id":435,"_type":13,"title":436,"_source":15,"_file":437,"_stem":438,"_extension":18},"/shared/en-us/main-navigation","en-us",{"logo":23,"freeTrial":28,"sales":33,"login":38,"items":43,"search":376,"minimal":407,"duo":426},{"config":24},{"href":25,"dataGaName":26,"dataGaLocation":27},"/","gitlab logo","header",{"text":29,"config":30},"Get free trial",{"href":31,"dataGaName":32,"dataGaLocation":27},"https://gitlab.com/-/trial_registrations/new?glm_source=about.gitlab.com&glm_content=default-saas-trial/","free trial",{"text":34,"config":35},"Talk to sales",{"href":36,"dataGaName":37,"dataGaLocation":27},"/sales/","sales",{"text":39,"config":40},"Sign in",{"href":41,"dataGaName":42,"dataGaLocation":27},"https://gitlab.com/users/sign_in/","sign in",[44,88,186,191,297,357],{"text":45,"config":46,"cards":48,"footer":71},"Platform",{"dataNavLevelOne":47},"platform",[49,55,63],{"title":45,"description":50,"link":51},"The most comprehensive AI-powered DevSecOps Platform",{"text":52,"config":53},"Explore our Platform",{"href":54,"dataGaName":47,"dataGaLocation":27},"/platform/",{"title":56,"description":57,"link":58},"GitLab Duo (AI)","Build software faster with AI at every stage of development",{"text":59,"config":60},"Meet GitLab Duo",{"href":61,"dataGaName":62,"dataGaLocation":27},"/gitlab-duo/","gitlab duo 
ai",{"title":64,"description":65,"link":66},"Why GitLab","10 reasons why Enterprises choose GitLab",{"text":67,"config":68},"Learn more",{"href":69,"dataGaName":70,"dataGaLocation":27},"/why-gitlab/","why gitlab",{"title":72,"items":73},"Get started with",[74,79,84],{"text":75,"config":76},"Platform Engineering",{"href":77,"dataGaName":78,"dataGaLocation":27},"/solutions/platform-engineering/","platform engineering",{"text":80,"config":81},"Developer Experience",{"href":82,"dataGaName":83,"dataGaLocation":27},"/developer-experience/","Developer experience",{"text":85,"config":86},"MLOps",{"href":87,"dataGaName":85,"dataGaLocation":27},"/topics/devops/the-role-of-ai-in-devops/",{"text":89,"left":90,"config":91,"link":93,"lists":97,"footer":168},"Product",true,{"dataNavLevelOne":92},"solutions",{"text":94,"config":95},"View all Solutions",{"href":96,"dataGaName":92,"dataGaLocation":27},"/solutions/",[98,123,147],{"title":99,"description":100,"link":101,"items":106},"Automation","CI/CD and automation to accelerate deployment",{"config":102},{"icon":103,"href":104,"dataGaName":105,"dataGaLocation":27},"AutomatedCodeAlt","/solutions/delivery-automation/","automated software delivery",[107,111,115,119],{"text":108,"config":109},"CI/CD",{"href":110,"dataGaLocation":27,"dataGaName":108},"/solutions/continuous-integration/",{"text":112,"config":113},"AI-Assisted Development",{"href":61,"dataGaLocation":27,"dataGaName":114},"AI assisted development",{"text":116,"config":117},"Source Code Management",{"href":118,"dataGaLocation":27,"dataGaName":116},"/solutions/source-code-management/",{"text":120,"config":121},"Automated Software Delivery",{"href":104,"dataGaLocation":27,"dataGaName":122},"Automated software delivery",{"title":124,"description":125,"link":126,"items":131},"Security","Deliver code faster without compromising security",{"config":127},{"href":128,"dataGaName":129,"dataGaLocation":27,"icon":130},"/solutions/security-compliance/","security and 
compliance","ShieldCheckLight",[132,137,142],{"text":133,"config":134},"Application Security Testing",{"href":135,"dataGaName":136,"dataGaLocation":27},"/solutions/application-security-testing/","Application security testing",{"text":138,"config":139},"Software Supply Chain Security",{"href":140,"dataGaLocation":27,"dataGaName":141},"/solutions/supply-chain/","Software supply chain security",{"text":143,"config":144},"Software Compliance",{"href":145,"dataGaName":146,"dataGaLocation":27},"/solutions/software-compliance/","software compliance",{"title":148,"link":149,"items":154},"Measurement",{"config":150},{"icon":151,"href":152,"dataGaName":153,"dataGaLocation":27},"DigitalTransformation","/solutions/visibility-measurement/","visibility and measurement",[155,159,163],{"text":156,"config":157},"Visibility & Measurement",{"href":152,"dataGaLocation":27,"dataGaName":158},"Visibility and Measurement",{"text":160,"config":161},"Value Stream Management",{"href":162,"dataGaLocation":27,"dataGaName":160},"/solutions/value-stream-management/",{"text":164,"config":165},"Analytics & Insights",{"href":166,"dataGaLocation":27,"dataGaName":167},"/solutions/analytics-and-insights/","Analytics and insights",{"title":169,"items":170},"GitLab for",[171,176,181],{"text":172,"config":173},"Enterprise",{"href":174,"dataGaLocation":27,"dataGaName":175},"/enterprise/","enterprise",{"text":177,"config":178},"Small Business",{"href":179,"dataGaLocation":27,"dataGaName":180},"/small-business/","small business",{"text":182,"config":183},"Public Sector",{"href":184,"dataGaLocation":27,"dataGaName":185},"/solutions/public-sector/","public sector",{"text":187,"config":188},"Pricing",{"href":189,"dataGaName":190,"dataGaLocation":27,"dataNavLevelOne":190},"/pricing/","pricing",{"text":192,"config":193,"link":195,"lists":199,"feature":284},"Resources",{"dataNavLevelOne":194},"resources",{"text":196,"config":197},"View all 
resources",{"href":198,"dataGaName":194,"dataGaLocation":27},"/resources/",[200,233,256],{"title":201,"items":202},"Getting started",[203,208,213,218,223,228],{"text":204,"config":205},"Install",{"href":206,"dataGaName":207,"dataGaLocation":27},"/install/","install",{"text":209,"config":210},"Quick start guides",{"href":211,"dataGaName":212,"dataGaLocation":27},"/get-started/","quick setup checklists",{"text":214,"config":215},"Learn",{"href":216,"dataGaLocation":27,"dataGaName":217},"https://university.gitlab.com/","learn",{"text":219,"config":220},"Product documentation",{"href":221,"dataGaName":222,"dataGaLocation":27},"https://docs.gitlab.com/","product documentation",{"text":224,"config":225},"Best practice videos",{"href":226,"dataGaName":227,"dataGaLocation":27},"/getting-started-videos/","best practice videos",{"text":229,"config":230},"Integrations",{"href":231,"dataGaName":232,"dataGaLocation":27},"/integrations/","integrations",{"title":234,"items":235},"Discover",[236,241,246,251],{"text":237,"config":238},"Customer success stories",{"href":239,"dataGaName":240,"dataGaLocation":27},"/customers/","customer success stories",{"text":242,"config":243},"Blog",{"href":244,"dataGaName":245,"dataGaLocation":27},"/blog/","blog",{"text":247,"config":248},"Remote",{"href":249,"dataGaName":250,"dataGaLocation":27},"https://handbook.gitlab.com/handbook/company/culture/all-remote/","remote",{"text":252,"config":253},"TeamOps",{"href":254,"dataGaName":255,"dataGaLocation":27},"/teamops/","teamops",{"title":257,"items":258},"Connect",[259,264,269,274,279],{"text":260,"config":261},"GitLab 
Services",{"href":262,"dataGaName":263,"dataGaLocation":27},"/services/","services",{"text":265,"config":266},"Community",{"href":267,"dataGaName":268,"dataGaLocation":27},"/community/","community",{"text":270,"config":271},"Forum",{"href":272,"dataGaName":273,"dataGaLocation":27},"https://forum.gitlab.com/","forum",{"text":275,"config":276},"Events",{"href":277,"dataGaName":278,"dataGaLocation":27},"/events/","events",{"text":280,"config":281},"Partners",{"href":282,"dataGaName":283,"dataGaLocation":27},"/partners/","partners",{"backgroundColor":285,"textColor":286,"text":287,"image":288,"link":292},"#2f2a6b","#fff","Insights for the future of software development",{"altText":289,"config":290},"the source promo card",{"src":291},"/images/navigation/the-source-promo-card.svg",{"text":293,"config":294},"Read the latest",{"href":295,"dataGaName":296,"dataGaLocation":27},"/the-source/","the source",{"text":298,"config":299,"lists":301},"Company",{"dataNavLevelOne":300},"company",[302],{"items":303},[304,309,315,317,322,327,332,337,342,347,352],{"text":305,"config":306},"About",{"href":307,"dataGaName":308,"dataGaLocation":27},"/company/","about",{"text":310,"config":311,"footerGa":314},"Jobs",{"href":312,"dataGaName":313,"dataGaLocation":27},"/jobs/","jobs",{"dataGaName":313},{"text":275,"config":316},{"href":277,"dataGaName":278,"dataGaLocation":27},{"text":318,"config":319},"Leadership",{"href":320,"dataGaName":321,"dataGaLocation":27},"/company/team/e-group/","leadership",{"text":323,"config":324},"Team",{"href":325,"dataGaName":326,"dataGaLocation":27},"/company/team/","team",{"text":328,"config":329},"Handbook",{"href":330,"dataGaName":331,"dataGaLocation":27},"https://handbook.gitlab.com/","handbook",{"text":333,"config":334},"Investor relations",{"href":335,"dataGaName":336,"dataGaLocation":27},"https://ir.gitlab.com/","investor relations",{"text":338,"config":339},"Trust Center",{"href":340,"dataGaName":341,"dataGaLocation":27},"/security/","trust 
center",{"text":343,"config":344},"AI Transparency Center",{"href":345,"dataGaName":346,"dataGaLocation":27},"/ai-transparency-center/","ai transparency center",{"text":348,"config":349},"Newsletter",{"href":350,"dataGaName":351,"dataGaLocation":27},"/company/contact/","newsletter",{"text":353,"config":354},"Press",{"href":355,"dataGaName":356,"dataGaLocation":27},"/press/","press",{"text":358,"config":359,"lists":360},"Contact us",{"dataNavLevelOne":300},[361],{"items":362},[363,366,371],{"text":34,"config":364},{"href":36,"dataGaName":365,"dataGaLocation":27},"talk to sales",{"text":367,"config":368},"Get help",{"href":369,"dataGaName":370,"dataGaLocation":27},"/support/","get help",{"text":372,"config":373},"Customer portal",{"href":374,"dataGaName":375,"dataGaLocation":27},"https://customers.gitlab.com/customers/sign_in/","customer portal",{"close":377,"login":378,"suggestions":385},"Close",{"text":379,"link":380},"To search repositories and projects, login to",{"text":381,"config":382},"gitlab.com",{"href":41,"dataGaName":383,"dataGaLocation":384},"search login","search",{"text":386,"default":387},"Suggestions",[388,390,394,396,400,404],{"text":56,"config":389},{"href":61,"dataGaName":56,"dataGaLocation":384},{"text":391,"config":392},"Code Suggestions (AI)",{"href":393,"dataGaName":391,"dataGaLocation":384},"/solutions/code-suggestions/",{"text":108,"config":395},{"href":110,"dataGaName":108,"dataGaLocation":384},{"text":397,"config":398},"GitLab on AWS",{"href":399,"dataGaName":397,"dataGaLocation":384},"/partners/technology-partners/aws/",{"text":401,"config":402},"GitLab on Google Cloud",{"href":403,"dataGaName":401,"dataGaLocation":384},"/partners/technology-partners/google-cloud-platform/",{"text":405,"config":406},"Why GitLab?",{"href":69,"dataGaName":405,"dataGaLocation":384},{"freeTrial":408,"mobileIcon":413,"desktopIcon":418,"secondaryButton":421},{"text":409,"config":410},"Start free 
trial",{"href":411,"dataGaName":32,"dataGaLocation":412},"https://gitlab.com/-/trials/new/","nav",{"altText":414,"config":415},"Gitlab Icon",{"src":416,"dataGaName":417,"dataGaLocation":412},"/images/brand/gitlab-logo-tanuki.svg","gitlab icon",{"altText":414,"config":419},{"src":420,"dataGaName":417,"dataGaLocation":412},"/images/brand/gitlab-logo-type.svg",{"text":422,"config":423},"Get Started",{"href":424,"dataGaName":425,"dataGaLocation":412},"https://gitlab.com/-/trial_registrations/new?glm_source=about.gitlab.com/compare/gitlab-vs-github/","get started",{"freeTrial":427,"mobileIcon":431,"desktopIcon":433},{"text":428,"config":429},"Learn more about GitLab Duo",{"href":61,"dataGaName":430,"dataGaLocation":412},"gitlab duo",{"altText":414,"config":432},{"src":416,"dataGaName":417,"dataGaLocation":412},{"altText":414,"config":434},{"src":420,"dataGaName":417,"dataGaLocation":412},"content:shared:en-us:main-navigation.yml","Main Navigation","shared/en-us/main-navigation.yml","shared/en-us/main-navigation",{"_path":440,"_dir":21,"_draft":6,"_partial":6,"_locale":7,"title":441,"button":442,"image":447,"config":451,"_id":453,"_type":13,"_source":15,"_file":454,"_stem":455,"_extension":18},"/shared/en-us/banner","is now in public beta!",{"text":443,"config":444},"Try the Beta",{"href":445,"dataGaName":446,"dataGaLocation":27},"/gitlab-duo/agent-platform/","duo banner",{"altText":448,"config":449},"GitLab Duo Agent Platform",{"src":450},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1753720689/somrf9zaunk0xlt7ne4x.svg",{"layout":452},"release","content:shared:en-us:banner.yml","shared/en-us/banner.yml","shared/en-us/banner",{"_path":457,"_dir":21,"_draft":6,"_partial":6,"_locale":7,"data":458,"_id":662,"_type":13,"title":663,"_source":15,"_file":664,"_stem":665,"_extension":18},"/shared/en-us/main-footer",{"text":459,"source":460,"edit":466,"contribute":471,"config":476,"items":481,"minimal":654},"Git is a trademark of Software Freedom Conservancy and our 
use of 'GitLab' is under license",{"text":461,"config":462},"View page source",{"href":463,"dataGaName":464,"dataGaLocation":465},"https://gitlab.com/gitlab-com/marketing/digital-experience/about-gitlab-com/","page source","footer",{"text":467,"config":468},"Edit this page",{"href":469,"dataGaName":470,"dataGaLocation":465},"https://gitlab.com/gitlab-com/marketing/digital-experience/about-gitlab-com/-/blob/main/content/","web ide",{"text":472,"config":473},"Please contribute",{"href":474,"dataGaName":475,"dataGaLocation":465},"https://gitlab.com/gitlab-com/marketing/digital-experience/about-gitlab-com/-/blob/main/CONTRIBUTING.md/","please contribute",{"twitter":477,"facebook":478,"youtube":479,"linkedin":480},"https://twitter.com/gitlab","https://www.facebook.com/gitlab","https://www.youtube.com/channel/UCnMGQ8QHMAnVIsI3xJrihhg","https://www.linkedin.com/company/gitlab-com",[482,505,561,590,624],{"title":45,"links":483,"subMenu":488},[484],{"text":485,"config":486},"DevSecOps platform",{"href":54,"dataGaName":487,"dataGaLocation":465},"devsecops platform",[489],{"title":187,"links":490},[491,495,500],{"text":492,"config":493},"View plans",{"href":189,"dataGaName":494,"dataGaLocation":465},"view plans",{"text":496,"config":497},"Why Premium?",{"href":498,"dataGaName":499,"dataGaLocation":465},"/pricing/premium/","why premium",{"text":501,"config":502},"Why Ultimate?",{"href":503,"dataGaName":504,"dataGaLocation":465},"/pricing/ultimate/","why ultimate",{"title":506,"links":507},"Solutions",[508,513,515,517,522,527,531,534,538,543,545,548,551,556],{"text":509,"config":510},"Digital transformation",{"href":511,"dataGaName":512,"dataGaLocation":465},"/topics/digital-transformation/","digital transformation",{"text":133,"config":514},{"href":135,"dataGaName":133,"dataGaLocation":465},{"text":122,"config":516},{"href":104,"dataGaName":105,"dataGaLocation":465},{"text":518,"config":519},"Agile 
development",{"href":520,"dataGaName":521,"dataGaLocation":465},"/solutions/agile-delivery/","agile delivery",{"text":523,"config":524},"Cloud transformation",{"href":525,"dataGaName":526,"dataGaLocation":465},"/topics/cloud-native/","cloud transformation",{"text":528,"config":529},"SCM",{"href":118,"dataGaName":530,"dataGaLocation":465},"source code management",{"text":108,"config":532},{"href":110,"dataGaName":533,"dataGaLocation":465},"continuous integration & delivery",{"text":535,"config":536},"Value stream management",{"href":162,"dataGaName":537,"dataGaLocation":465},"value stream management",{"text":539,"config":540},"GitOps",{"href":541,"dataGaName":542,"dataGaLocation":465},"/solutions/gitops/","gitops",{"text":172,"config":544},{"href":174,"dataGaName":175,"dataGaLocation":465},{"text":546,"config":547},"Small business",{"href":179,"dataGaName":180,"dataGaLocation":465},{"text":549,"config":550},"Public sector",{"href":184,"dataGaName":185,"dataGaLocation":465},{"text":552,"config":553},"Education",{"href":554,"dataGaName":555,"dataGaLocation":465},"/solutions/education/","education",{"text":557,"config":558},"Financial services",{"href":559,"dataGaName":560,"dataGaLocation":465},"/solutions/finance/","financial 
services",{"title":192,"links":562},[563,565,567,569,572,574,576,578,580,582,584,586,588],{"text":204,"config":564},{"href":206,"dataGaName":207,"dataGaLocation":465},{"text":209,"config":566},{"href":211,"dataGaName":212,"dataGaLocation":465},{"text":214,"config":568},{"href":216,"dataGaName":217,"dataGaLocation":465},{"text":219,"config":570},{"href":221,"dataGaName":571,"dataGaLocation":465},"docs",{"text":242,"config":573},{"href":244,"dataGaName":245,"dataGaLocation":465},{"text":237,"config":575},{"href":239,"dataGaName":240,"dataGaLocation":465},{"text":247,"config":577},{"href":249,"dataGaName":250,"dataGaLocation":465},{"text":260,"config":579},{"href":262,"dataGaName":263,"dataGaLocation":465},{"text":252,"config":581},{"href":254,"dataGaName":255,"dataGaLocation":465},{"text":265,"config":583},{"href":267,"dataGaName":268,"dataGaLocation":465},{"text":270,"config":585},{"href":272,"dataGaName":273,"dataGaLocation":465},{"text":275,"config":587},{"href":277,"dataGaName":278,"dataGaLocation":465},{"text":280,"config":589},{"href":282,"dataGaName":283,"dataGaLocation":465},{"title":298,"links":591},[592,594,596,598,600,602,604,608,613,615,617,619],{"text":305,"config":593},{"href":307,"dataGaName":300,"dataGaLocation":465},{"text":310,"config":595},{"href":312,"dataGaName":313,"dataGaLocation":465},{"text":318,"config":597},{"href":320,"dataGaName":321,"dataGaLocation":465},{"text":323,"config":599},{"href":325,"dataGaName":326,"dataGaLocation":465},{"text":328,"config":601},{"href":330,"dataGaName":331,"dataGaLocation":465},{"text":333,"config":603},{"href":335,"dataGaName":336,"dataGaLocation":465},{"text":605,"config":606},"Sustainability",{"href":607,"dataGaName":605,"dataGaLocation":465},"/sustainability/",{"text":609,"config":610},"Diversity, inclusion and belonging (DIB)",{"href":611,"dataGaName":612,"dataGaLocation":465},"/diversity-inclusion-belonging/","Diversity, inclusion and 
belonging",{"text":338,"config":614},{"href":340,"dataGaName":341,"dataGaLocation":465},{"text":348,"config":616},{"href":350,"dataGaName":351,"dataGaLocation":465},{"text":353,"config":618},{"href":355,"dataGaName":356,"dataGaLocation":465},{"text":620,"config":621},"Modern Slavery Transparency Statement",{"href":622,"dataGaName":623,"dataGaLocation":465},"https://handbook.gitlab.com/handbook/legal/modern-slavery-act-transparency-statement/","modern slavery transparency statement",{"title":625,"links":626},"Contact Us",[627,630,632,634,639,644,649],{"text":628,"config":629},"Contact an expert",{"href":36,"dataGaName":37,"dataGaLocation":465},{"text":367,"config":631},{"href":369,"dataGaName":370,"dataGaLocation":465},{"text":372,"config":633},{"href":374,"dataGaName":375,"dataGaLocation":465},{"text":635,"config":636},"Status",{"href":637,"dataGaName":638,"dataGaLocation":465},"https://status.gitlab.com/","status",{"text":640,"config":641},"Terms of use",{"href":642,"dataGaName":643,"dataGaLocation":465},"/terms/","terms of use",{"text":645,"config":646},"Privacy statement",{"href":647,"dataGaName":648,"dataGaLocation":465},"/privacy/","privacy statement",{"text":650,"config":651},"Cookie preferences",{"dataGaName":652,"dataGaLocation":465,"id":653,"isOneTrustButton":90},"cookie preferences","ot-sdk-btn",{"items":655},[656,658,660],{"text":640,"config":657},{"href":642,"dataGaName":643,"dataGaLocation":465},{"text":645,"config":659},{"href":647,"dataGaName":648,"dataGaLocation":465},{"text":650,"config":661},{"dataGaName":652,"dataGaLocation":465,"id":653,"isOneTrustButton":90},"content:shared:en-us:main-footer.yml","Main 
Footer","shared/en-us/main-footer.yml","shared/en-us/main-footer",{"allPosts":667,"featuredPost":2526,"totalPagesCount":2545,"initialPosts":2546},[668,693,714,734,755,777,798,822,842,862,886,909,930,950,969,991,1012,1033,1052,1072,1092,1113,1135,1157,1176,1197,1216,1235,1255,1275,1296,1314,1333,1354,1372,1391,1409,1430,1449,1467,1485,1504,1522,1541,1560,1578,1597,1616,1636,1657,1675,1695,1714,1734,1755,1775,1794,1813,1832,1852,1872,1891,1909,1927,1946,1965,1984,2003,2022,2040,2059,2079,2097,2118,2139,2159,2177,2198,2219,2237,2257,2277,2296,2317,2335,2354,2373,2394,2413,2432,2451,2470,2488,2507],{"_path":669,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":670,"content":678,"config":686,"_id":689,"_type":13,"title":690,"_source":15,"_file":691,"_stem":692,"_extension":18},"/en-us/blog/beginner-guide-ci-cd",{"title":671,"description":672,"ogTitle":671,"ogDescription":672,"noIndex":6,"ogImage":673,"ogUrl":674,"ogSiteName":675,"ogType":676,"canonicalUrls":674,"schema":677},"GitLab’s guide to CI/CD for beginners","CI/CD is a key part of the DevOps journey. 
Here’s everything you need to understand about this game-changing process.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749681391/Blog/Hero%20Images/beginnercicd.jpg","https://about.gitlab.com/blog/beginner-guide-ci-cd","https://about.gitlab.com","article","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"GitLab’s guide to CI/CD for beginners\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Valerie Silverthorne\"}],\n        \"datePublished\": \"2020-07-06\",\n      }",{"title":671,"description":672,"authors":679,"heroImage":673,"date":681,"body":682,"category":683,"tags":684},[680],"Valerie Silverthorne","2020-07-06","\n\nContinuous integration and [continuous delivery/deployment](/topics/continuous-delivery/) (most often referred to as CI/CD) are the cornerstones of [DevOps](/topics/devops/) and any modern software development practice. Here’s everything you need to know about [CI/CD for beginners](/blog/how-to-keep-up-with-ci-cd-best-practices/).\n\n## What CI/CD means\n\nIf your software development process involves a lot of stopping, starting and handoffs, [CI/CD](/topics/ci-cd/) may be just what you’re looking for. A CI/CD pipeline is a seamless way for developers to make changes to code that are then automatically tested and pushed out for delivery and deployment. The goal is to eliminate downtime. Get CI/CD right and you’re well on the road to successful DevOps and dramatically faster code release. 
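The pipeline itself is usually declared in a short configuration file checked into the repository. As a purely illustrative sketch (the stage and job names here are made up for the example, not prescribed by GitLab), a minimal `.gitlab-ci.yml` might look like:

```yaml
# A commit triggers the stages in order; a failure in `test`
# stops the pipeline before `deploy` ever runs.
stages:
  - build
  - test
  - deploy

build-job:
  stage: build
  script: echo "Compiling the code..."

test-job:
  stage: test
  script: echo "Running the test suite..."

deploy-job:
  stage: deploy
  script: echo "Deploying to production..."
  environment: production
```

Every push runs this same sequence automatically, which is what removes the stopping, starting, and handoffs described above.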
In our [2020 Global DevSecOps Survey](/blog/devsecops-survey-released/), nearly 83% of survey takers said they’re getting code out the door more quickly thanks to DevOps.\n\n## Understand CI/CD basics\n\nIf you’re not sure what a pipeline is, or how the entire process works, here’s a [detailed explanation](/blog/a-beginners-guide-to-continuous-integration/) of how all the moving parts work together to make software development quicker and easier.\n\n## Four benefits of CI/CD\n\nYes, CI/CD helps speed up delivery of code, but it also makes for happier software developers. At a time when there continues to be [a worldwide shortage of software developers](https://www.gartner.com/en/newsroom/press-releases/2019-01-17-gartner-survey-shows-global-talent-shortage-is-now-the-top-emerging-risk-facing-organizations), it’s critical to retain technical talent. Developer job satisfaction is just one of [four key benefits](/blog/positive-outcomes-ci-cd/) that come from implementing a CI/CD process.\n\n## How to pick the right CI/CD tool\n\nNow that you’re sold on the [benefits of CI/CD](/topics/ci-cd/benefits-continuous-integration/), it’s time to choose a tool. There are a number of considerations, from [budget to room for growth](/topics/ci-cd/choose-continuous-integration-tool/), so it’s worth taking the time to think it through.\n\n## How to make the business case for CI/CD\n\nTying a CI/CD process to ROI isn’t difficult, and it’s an important step toward getting management buy-in. Here are [three factors to consider](/blog/modernize-your-ci-cd/) – including the hidden cost of toolchain sprawl – as you make the case for CI/CD.\n\n## Take 20 minutes and build a CI/CD pipeline\n\nOK, enough talking about theory... it’s time to do something. 
Using GitLab’s [Auto DevOps](https://docs.gitlab.com/ee/topics/autodevops/) functionality, you can [move from code to production](/blog/building-a-cicd-pipeline-in-20-mins/) in just two simple steps and in only 20 minutes (no, really, just 20 minutes).\n\n## Next stop: Kubernetes!\n\nFinally, you can tie your GitLab CI pipeline into Google Kubernetes Engine (GKE) and as a bonus it takes only 15 minutes. Our [step-by-step tutorial](/blog/gitlab-ci-on-google-kubernetes-engine/) is completely beginner-friendly.\n\n**Level up your CI/CD knowledge:**\n\n[How CI can put the \"Sec\" in DevSecOps](/blog/solve-devsecops-challenges-with-gitlab-ci-cd/)\n\n[Autoscale GitLab CI with AWS Fargate](/blog/introducing-autoscaling-gitlab-runners-on-aws-fargate/)\n\n[Get started with parent-child pipelines](/blog/parent-child-pipelines/)\n\nCover image by [Kyle Glenn](https://unsplash.com/@kylejglenn) on [Unsplash](https://www.unsplash.com)\n{: .note}\n","engineering",[108,685,9],"DevOps",{"slug":687,"featured":6,"template":688},"beginner-guide-ci-cd","BlogPost","content:en-us:blog:beginner-guide-ci-cd.yml","Beginner Guide Ci Cd","en-us/blog/beginner-guide-ci-cd.yml","en-us/blog/beginner-guide-ci-cd",{"_path":694,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":695,"content":701,"config":708,"_id":710,"_type":13,"title":711,"_source":15,"_file":712,"_stem":713,"_extension":18},"/en-us/blog/best-practices-for-kubernetes-runners",{"title":696,"description":697,"ogTitle":696,"ogDescription":697,"noIndex":6,"ogImage":698,"ogUrl":699,"ogSiteName":675,"ogType":676,"canonicalUrls":699,"schema":700},"Best practices to keep your Kubernetes runners moving","In a presentation at GitLab Commit San Francisco, a senior software engineer from F5 Networks shares some best practices for working with Kubernetes 
runners.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749681341/Blog/Hero%20Images/trackandfield.jpg","https://about.gitlab.com/blog/best-practices-for-kubernetes-runners","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"Best practices to keep your Kubernetes runners moving\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Sara Kassabian\"}],\n        \"datePublished\": \"2020-05-27\",\n      }",{"title":696,"description":697,"authors":702,"heroImage":698,"date":704,"body":705,"category":683,"tags":706},[703],"Sara Kassabian","2020-05-27","Sometimes in software engineering, you have to learn the hard way. GitLab CI\nis extremely powerful and flexible, but it’s also easy to make mistakes that\ncould take out a GitLab runner, which can clog up Sidekiq and bring down\nyour entire GitLab instance.\n\n\nLuckily, Sean Smith, senior software engineer for F5 Networks has been\nthrough it, and summarizes some of their learnings in [his talk at GitLab\nCommit San Francisco](https://www.youtube.com/watch?v=Hks5ElUxkP4). In the\npresentation, Sean goes in-depth about a past incident that clogged up F5\nNetwork's GitLab runner, and shares tips on setting limits for Kubernetes\n(K8s) runners.\n\n\nSean is a GitLab administrator for [F5 Networks](https://www.f5.com/), a\ncompany with about 1,800 users worldwide running 7,500 projects each month –\nexcluding forks. That’s roughly 350,000 - 400,000 CI jobs going through the\nK8s runners each month. 
Until some recent hires, there were only three engineers to handle it all.\n\n\nInstead of running a giant GitLab instance on one VM, F5 broke up their instance into seven different servers: two HA web servers, one Postgres server, a Postgres replica, Sidekiq, Gitaly (our Git filesystem), and Redis.\n\n\n## Keep your GitLab runners up and moving\n\n\nF5 uses two types of GitLab runners:\n\n\n*   Kubernetes: About 90% of F5 jobs go through K8s\n\n*   Docker: Docker Machine is run on-prem and in the cloud\n\n\n**Why use Docker?** F5 uses Docker to configure cluster networks in different jobs, as well as for unit testing. Since Docker Machine can run on-prem and in the cloud, it’s easy to have a VM dedicated to a job, which lets you manage Docker images and containers and set up your cluster networking topology within Docker. You can run your tests and tear everything down afterward without affecting other users – something that isn’t really possible with Kubernetes runners.\n\n\nOtherwise, F5 Networks uses Kubernetes, but keeping your K8s runners up and running isn’t necessarily foolproof.\n\n\n### CI jobs can spawn\n\n\nSometimes, a seemingly benign coding error can create unanticipated consequences for your Kubernetes runners.\n\n\nOne time, an F5 engineer decided to use a GitLab CI job to automatically configure different settings on various jobs and projects. It made sense to configure using GitLab CI because the engineer wanted to be able to use [Git for version control](/topics/version-control/). Version control makes it easier for the team to iterate on the code transparently. He wrote the code to run the job.\n\n\nBut he didn’t read the fine print in the library he was using. The code he wrote looked for the project ID and, if it found one, ran the pipeline once per hour at the 30-minute mark. 
The assumption was that if there was already a matching scheduled task, the create function would not create a duplicate. Unfortunately, this was not the case. The code he ran caused the number of CI jobs to grow exponentially.\n\n\n![The code that clogged the K8s runner with GitLab CI jobs for F5 Networks](https://about.gitlab.com/images/blogimages/problemcode.png){: .shadow}\n\nThe code that clogged the K8s runner with GitLab CI jobs for F5 Networks. Can you see the problem yet?\n\n{: .note.text-center}\n\n\n\"You schedule a job, then next you schedule another job so now you've got two jobs scheduled, and then you've got four jobs scheduled, and then eight; after 10 iterations, you get around 1,024 jobs scheduled, and after 15, around 32,000 jobs. If this was allowed to run for 24 hours, you would end up with 16.7 million jobs being scheduled by the 24th hour,\" says Sean.\n\n\nIn short: Chaos. Remember, F5 Networks has a CI pipeline capacity of 350,000 to 400,000 jobs per month, so 16.7 million jobs in 24 hours could easily clog the system, taking down the K8s nodes as well as the GitLab nodes.\n\n\nLuckily, there’s a simple enough fix. First, identify which project is causing the problem and disable CI on it so it can’t create any new jobs. Next, kill all the pending jobs by [running this snippet](https://gitlab.com/snippets/1924269):\n\n\n```\n# gitlab-rails console\np = Project.find_by_full_path('rogue-group/rogue-project')\nCi::Pipeline.where(project_id: p.id, status: 'pending').each { |pipeline| pipeline.cancel }\nexit\n```\n\n\nIt’s really a judgment call whether to kill a running job or not. If a job is currently running and is going to take all of 30 seconds, then maybe don’t bother killing it; but if the job is going to take 30 minutes, consider killing it to free up resources for your users.\n\n\nF5 learned a lesson here and set up a monitoring alert to help ensure the job queue doesn’t back up like that again. 
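The core of such a check is tiny. Here is a hedged sketch in plain Ruby; the threshold value is made up, and in practice the pending count would come from the GitLab API or the rails console (e.g. `Ci::Pipeline.where(status: 'pending').count`) rather than being passed in by hand:

```ruby
# Illustrative pending-queue alert check. PENDING_THRESHOLD is a
# made-up number; tune it to your own runner capacity.
PENDING_THRESHOLD = 5_000

# Returns true when the pending queue has grown past the threshold
# and someone should be paged (strictly greater than, so a queue
# sitting exactly at the threshold stays quiet).
def queue_backed_up?(pending_count, threshold: PENDING_THRESHOLD)
  pending_count > threshold
end

# A healthy queue stays quiet; a runaway one (like 16.7 million
# scheduled jobs) trips the alert immediately.
puts queue_backed_up?(120)         # => false
puts queue_backed_up?(16_700_000)  # => true
```

The value of wiring this into a cron job, as F5 did, is that the queue is checked continuously rather than only when users start complaining.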
The cron job checks that F5 is not exceeding a preestablished threshold on the number of jobs in a pending state. The alert links to a dashboard and also includes the full playbook for how to resolve the problem (because let’s face it, nobody is at their best when troubleshooting bleary-eyed at 3 a.m.). At first there were some false positives, but the alerting has since been fine-tuned, and the system has saved F5 from two outages so far.\n\n\n### Push it to the limit\n\n\nThe fact is, nobody has an unlimited cloud budget, and resources are even more constrained for users who rely on on-prem hardware. Sean says F5 soon realized that, to meet the needs of all users, sensible limits had to be established so one or two mega-users didn’t devour all the resources. He has some tips on how to set limits in your Kubernetes and GitLab runners.\n\n\nWhile some users may be disgruntled that cloud limits exist and are enforced, the best approach is to keep an open dialogue with users about the limits while recognizing that projects expand and grow over time.\n\n\nFortunately, you can set the limits yourself and don’t have to rely on the goodwill of your users to conserve CPU. Kubernetes allows limits by default, and GitLab supports K8s requests and limits. The K8s scheduler uses requests to determine which nodes to run a workload on. 
Limits will kill a job if it exceeds the predefined limit. Requests and limits can be set to different values, but if requests aren’t specified and limits are, the scheduler will use the limits to determine the request value.\n\n\n[Take a peek at the limits F5 configured for their Kubernetes GitLab runner](https://gitlab.com/snippets/1926912).\n\n\n```toml\nconcurrent = 200\nlog_format = \"json\"\n\n[[runners]]\n  name = \"Kubernetes GitLab Runner\"\n  url = \"https://gitlab.example.com/ci\"\n  token = \"insert token here\"\n  executor = \"kubernetes\"\n  [runners.kubernetes]\n    namespace = \"gitlab-runner\"\n    service_account = \"gitlab-runner-user\"\n    pull_policy = \"always\"\n\n    # build container\n    cpu_limit = \"2\"\n    memory_limit = \"6Gi\"\n\n    # service containers\n    service_cpu_limit = \"1\"\n    service_memory_limit = \"1Gi\"\n\n    # helper container\n    helper_cpu_limit = \"1\"\n    helper_memory_limit = \"1Gi\"\n```\n\n\n\"We have got concurrency of 200 jobs, so it will at max spawn 200 jobs, and you'll see that we are limiting the CPU use on the build container to two and memory to six gigabytes, and on the helper and service CPU and memory limits, we have one CPU and one gig of memory each,\" says Sean. \"And so it gives you that flexibility to break it out because generally, you don't necessarily need as much CPU or as much memory on a service that you're spinning up in your CI job.\"\n\n\n## What comes first: Setting up Kubernetes runners or establishing limits?\n\n\n[DevOps](/topics/devops/) is a data-driven practice, so the idea of setting limits to conserve resources without any underlying data about what users are doing can seem counterintuitive. 
If you’re migrating to Kubernetes runners from a Docker runner or a shell runner, it’s easy enough to extrapolate the numbers to establish limits as you set up your Kubernetes runners.\n\n\nIf you’re brand-new to GitLab and GitLab CI, then it’s kind of a shot in the dark. Think about your bills and resource constraints: How much memory and CPU is available? Is anything else running on your K8s cluster? Chances are, your guesses will be incorrect – but that’s OK.\n\n\nIt might sound obvious, but if you’re running a hosted application on the same K8s cluster as your GitLab CI jobs, don’t set limits based on the capacity of a full K8s cluster. Ideally, you’d have a separate K8s cluster for GitLab CI jobs, but that isn’t always possible.\n\n\n### How F5 Networks did it\n\n\nF5 Networks started with a small team of roughly 50 people and maybe 100 projects in GitLab, so setting a limit on K8s wasn’t a major concern until the company – and, as a result, its projects – started to grow.\n\n\nOnce it came time to set limits on their preexisting K8s runners, the first step was to enable the K8s metrics server to monitor how their users consume resources. The next step was to determine what users are doing. Sean recommends using a tool like Prometheus, which has a native integration with GitLab, or Grafana (F5 used a tool called k9s) to extract the data from the K8s metrics server and display it on a dashboard.\n\n\n## Some more tips for Kubernetes runners\n\n\n### Cutting them off: Enforcing limits\n\n\nOnce a user hits their limit, most of the time the end result is that their job gets killed. Sometimes the user will notice a mistake, go in, and fix their code, but more often they will just ask for more resources.\n\n\nThe best way to determine whether or not to allocate more of your finite resources to a user is to determine need, Sean explains. 
Ask the user to return to you with concrete numbers about the amount of RAM or CPU they require. But if you don’t have the resources, then don’t overextend yourselves to the detriment of your other users.\n\n\n### Use labels to reveal more data\n\n\n[Labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set) make it easier to identify workloads in Kubernetes, and can be populated from environment variables within GitLab, for example, `job = \"$CI_JOB_ID\"` and `project = \"$CI_PROJECT_ID\"`. Labels can be used by admins who are manually running kubectl commands against K8s, or they can be used in reporting tools like Prometheus or Grafana for setting limits. But labels are most valuable for debugging.\n\n\nBear in mind, labels are finicky in Kubernetes. [There are certain characters (stay away from \"?\") that can cause jobs to fail](https://gitlab.com/gitlab-org/gitlab-runner/-/issues/4565). There is a 63-character limit on labels. If there is an unsupported character or the label is too long, the job won’t start. There won’t be a very good indication as to why your job won’t start, either, which can be a pain for troubleshooting. [Bookmark this page to learn more about labels in Kubernetes](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set) (including their limitations).\n\n\nGitLab users that run on K8s need to be cautious not to overburden the runner with GitLab CI jobs, and ought to consider setting limits on CPU to conserve valuable resources.\n\n\nWant to learn more about how F5 manages their Kubernetes runners on their GitLab instance? 
Watch Sean's presentation at GitLab Commit San Francisco in\nthe video below.\n\n\n\u003C!-- blank line -->\n\n\u003Cfigure class=\"video_container\">\n  \u003Ciframe src=\"https://www.youtube-nocookie.com/embed/Hks5ElUxkP4\" frameborder=\"0\" allowfullscreen=\"true\"> \u003C/iframe>\n\u003C/figure>\n\n\u003C!-- blank line -->\n\n\n## Learn more\n\n\n* [Read on](/solutions/kubernetes/) to learn more about how GitLab and\nKubernetes work together, and explore our plans for future integration with\nKubernetes.\n\n\n* Explore the official documentation on [Kubernetes\nexecutor](https://docs.gitlab.com/runner/executors/kubernetes.html), which\ncovers everything from choosing options in your configuration file to giving\nGitLab Runner access to the Kubernetes API, environment variables, volumes,\nhelper containers, security context, privileged mode, secret volume, and\nremoving old runner pods.\n\n\nCover Photo by [Kolleen\nGladden](https://unsplash.com/@rockthechaos?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText)\non\n[Unsplash](https://unsplash.com/s/photos/track-and-field?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText)\n\n{: .note.text-center}\n",[9,108,707],"user stories",{"slug":709,"featured":6,"template":688},"best-practices-for-kubernetes-runners","content:en-us:blog:best-practices-for-kubernetes-runners.yml","Best Practices For Kubernetes Runners","en-us/blog/best-practices-for-kubernetes-runners.yml","en-us/blog/best-practices-for-kubernetes-runners",{"_path":715,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":716,"content":722,"config":728,"_id":730,"_type":13,"title":731,"_source":15,"_file":732,"_stem":733,"_extension":18},"/en-us/blog/building-a-cicd-pipeline-in-20-mins",{"title":717,"description":718,"ogTitle":717,"ogDescription":718,"noIndex":6,"ogImage":719,"ogUrl":720,"ogSiteName":675,"ogType":676,"canonicalUrls":720,"schema":721},"How to build a CI/CD pipeline in 20 minutes or less","Deploying your pipeline 
to Kubernetes is just a 'git push' away using GitLab's Auto DevOps feature.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749666903/Blog/Hero%20Images/pipeline.jpg","https://about.gitlab.com/blog/building-a-cicd-pipeline-in-20-mins","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"How to build a CI/CD pipeline in 20 minutes or less\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Sara Kassabian\"}],\n        \"datePublished\": \"2019-09-26\",\n      }",{"title":717,"description":718,"authors":723,"heroImage":719,"date":724,"body":725,"category":683,"tags":726},[703],"2019-09-26","\nIn software development, time really is money. GitLab users know that by using our [Auto DevOps functionality](https://docs.gitlab.com/ee/topics/autodevops/), you can move from code to production in just two simple steps.\n\n[Eddie Zaneski](https://gitlab.com/eddiezane) of Digital Ocean joined us in Brooklyn at [GitLab Commit, our inaugural user conference](/blog/wrapping-up-commit/). 
In an informative and light-hearted talk, Eddie demonstrated how to build and deploy a [CI/CD pipeline](/topics/ci-cd/) to a Kubernetes cluster from scratch or by using GitLab’s [Auto DevOps](https://docs.gitlab.com/ee/topics/autodevops/) tooling in less than 20 minutes.\n\nIn the demo, Eddie and his co-founder were really wingin’ it when building an app for the “startup” he used for this demo, the Screaming Chicken Club.\n\n{::options parse_block_html=\"false\" /}\n\n\u003Cdiv class=\"center\">\n\n\u003Cblockquote class=\"twitter-tweet\">\u003Cp lang=\"en\" dir=\"ltr\">Massive shoutout to \u003Ca href=\"https://twitter.com/kamaln7?ref_src=twsrc%5Etfw\">@kamaln7\u003C/a> for building \u003Ca href=\"https://t.co/kke5hc2FC8\">https://t.co/kke5hc2FC8\u003C/a> and lending it to me for \u003Ca href=\"https://twitter.com/hashtag/GitLabCommit?src=hash&amp;ref_src=twsrc%5Etfw\">#GitLabCommit\u003C/a>\u003C/p>&mdash; Eddie Zaneski (@eddiezane) \u003Ca href=\"https://twitter.com/eddiezane/status/1174044146002288640?ref_src=twsrc%5Etfw\">September 17, 2019\u003C/a>\u003C/blockquote> \u003Cscript async src=\"https://platform.twitter.com/widgets.js\" charset=\"utf-8\">\u003C/script>\n\n\u003C/div>\n\n“I'm trying to raise money right now and VCs are caring about my tech,” said Eddie of his hypothetical start-up. 
“An easy way to score credit with VCs is by having a super secure and well-thought-out DevOps pipeline, and that's where GitLab really comes into play here.”\n\n[Auto DevOps](/topics/devops/) is an out-of-the-box solution that helps move your code into production faster by automating the complex components of building a CI/CD pipeline, such as: “Building your application into a container; checking it for vulnerabilities; checking it for dependencies; checking it for licenses; deploying that to a Kubernetes cluster; setting up host names, DNS, and TLS certs; automatically renewing them for you; and doing performance testing.”\n\nSo where do you start?\n\n## Spin up your Kubernetes cluster\n\nGitLab has an airtight integration with Kubernetes that makes it possible to [deploy software from GitLab’s CI/CD pipeline to Kubernetes](/solutions/kubernetes/) by using Auto DevOps or by building the pipeline yourself. Either way, the first step will be to [configure a new Kubernetes cluster to deploy your application](https://docs.gitlab.com/ee/user/project/clusters/index.html).\n\nIt’s really as simple as going to the left-hand sidebar on GitLab and clicking Operations > Kubernetes > Add cluster. This process works for [GCP or GKE users](https://docs.gitlab.com/ee/user/project/clusters/index.html#add-new-gke-cluster), as well as those that are not on Google Cloud or are using an on-prem solution. In the demo, Eddie used Digital Ocean’s managed Kubernetes service to create the cluster, select the data center, and pick the size of the node. Eddie estimated this process would take anywhere from three to five minutes.\n\nThe next step is to integrate the Kubernetes cluster into the project, which requires a number of manual tasks, including grabbing the URL for the Kubernetes API server, creating a service account and binding it to the cluster admin, and grabbing the service token that’s generated. 
In the spirit of innovative shortcuts, Eddie created a [kubectl plugin](https://gitlab.com/eddiezane/kubectl-gitlab_bootstrap) that makes it even easier to add the Kubernetes cluster to the associated GitLab project.\n\n“This is actually going to automatically bootstrap a Kubernetes cluster into your GitLab project, create all the service accounts, make all the GitLab API requests, and take care of everything under the hood.” Thanks, Eddie!\n\nNext, just grab the GitLab project ID, and run:\n\n`kubectl gitlab-bootstrap gitlab-project-id`\n\nThe result is a URL. Follow the URL to see more about the Kubernetes cluster in your GitLab project.\n\n## GitLab-managed applications make your life easier\n\nOnce you’re there, you’ll see a list of [GitLab-managed applications](https://docs.gitlab.com/ee/topics/autodevops/cloud_deployments/auto_devops_with_gke.html). These apps can be installed in just one click to help manage your new Kubernetes cluster.\n\n1. [Helm](https://docs.gitlab.com/ee/update/removals.html): Install Helm first, because it is the package manager for Kubernetes and is required to install the other applications.\n2. [Ingress](https://docs.gitlab.com/ee/update/removals.html): Once Helm is installed, you can install the [Ingress controller](https://docs.gitlab.com/ee/update/removals.html), which will handle all the routing and mapping within the cluster and will create a load balancer behind the scenes. **Copy the IP address that’s displayed; you’ll need it later.**\n3. [Prometheus](https://docs.gitlab.com/ee/update/removals.html): An open source tool that monitors your deployed applications.\n4. [Cert-Manager](https://docs.gitlab.com/ee/update/removals.html): This will handle all the certificates and make sure everything is up to date.\n5. 
[GitLab Runner](https://docs.gitlab.com/ee/update/removals.html): Lets you run your GitLab CI/CD on your own host, or within the Kubernetes cluster.\n\nThe superstar of the bunch is GitLab Runner, the open source project that is used to run your CI/CD jobs and send the results back to GitLab.\n\n## Launch Auto DevOps with the click of a button\n\nOnce you’ve created your Kubernetes cluster and installed the required applications, [launch the Auto DevOps process with the click of a button](https://docs.gitlab.com/ee/topics/autodevops/cloud_deployments/auto_devops_with_gke.html), literally.\n\n![Enable Auto DevOps](https://about.gitlab.com/images/blogimages/guide_enable_autodevops.jpg){: .shadow.medium.center}\n\nBy enabling Auto DevOps and selecting your deployment strategy (here is where you need the Ingress IP address), you kick off the CI/CD pipeline.\n\n## Or launch your own Auto DevOps process\n\nDon’t want to use our out-of-the-box Auto DevOps feature? You don’t have to. The good news is the underlying source code is available to you for each component of the deployment process, making it easy for you to parse out what jobs you'd like to run.\n\n“The great thing about GitLab being open source is nothing is magic, right? All this stuff is source code that we can all go look up and read,” says Eddie.\n\nThe source code for the entire out-of-the-box Auto DevOps process lives in [one YAML file](https://gitlab.com/gitlab-org/gitlab-foss/blob/master/lib/gitlab/ci/templates/Auto-DevOps.gitlab-ci.yml) in the GitLab repository. GitLab users are able to separately run jobs for each stage in the Auto DevOps process, from build to cleanup, simply by copy/pasting the [underlying source code](/solutions/source-code-management/) into a properly configured terminal.\n\nThe individual templates and components for the important jobs in each Auto DevOps stage are included in the YAML file. You can select which components you’d like to use. 
Note that nothing needs to be imported, because it all comes with your GitLab install.\n\nIn the demo, Eddie ran the jobs for the build and deploy stages as examples.\n\nRemember to return to the load balancer and grab the IP address Ingress created to configure your DNS, `git push`, and voilà! Your CI/CD pipeline is running.\n\n## A peek inside the pipeline\n\nDuring the demo, Eddie went behind the scenes to explain what was happening inside the pipelines for the build and deploy jobs he started.\n\n### Build\n\n“It's going to take care of a lot of stuff under the hood for us,” said Eddie. The pipeline uses Docker to build containers inside Docker, which logs in to the project's container registry.\n\n“So GitLab automatically provides you with a container registry for your project,” said Eddie. “It's going to substitute in a whole bunch of environment variables and handles all the logins and generates the token, and all that. So we don't actually have to think about anything.”\n\nNext, the Docker base image loads. Eddie went into more detail about how to write up the Docker set-up, but the GitLab build component can automatically figure out the type of project you're running and generate a Dockerfile with best practices to build the container.\n\n“So my project is building, compiling, pushing up my layers to the container registry, and then my build job should finish real quick and then my deploy job is going to kick off,” explained Eddie.\n\n### Deploy\n\nThe deploy job kicks off by spinning up a Helm chart that automatically fills the required information, such as the container ID, the host name, namespace, etc., into the template. Then it will create the Ingress entry, and then deploy the application.\n\n## Put your CI/CD pipelines on autopilot with GitLab and Kubernetes\n\nIn just a few minutes, Eddie was able to demonstrate two different ways to build a CI/CD pipeline by using GitLab and Kubernetes. 
While our Auto DevOps feature makes it so you don’t have to create a bunch of YAMLs from scratch (because, let’s face it, if you’re running Kubernetes you’re already running a ton of YAMLs), our open source Auto DevOps process makes it possible to pick and choose which components or jobs you’d like to run.\n\nWatch the entire video from GitLab Commit Brooklyn to see Eddie run a **third** CI/CD pipeline during his 17-minute talk.\n\n\u003Cfigure class=\"video_container\">\n  \u003Ciframe src=\"https://www.youtube.com/embed/-shvwiBwFVI\" frameborder=\"0\" allowfullscreen=\"true\"> \u003C/iframe>\n\u003C/figure>\n\nLike what you see? [Join us in London](/events/commit/) on October 9 for our second GitLab Commit event with all new talks!\n",[9,727],"cloud native",{"slug":729,"featured":6,"template":688},"building-a-cicd-pipeline-in-20-mins","content:en-us:blog:building-a-cicd-pipeline-in-20-mins.yml","Building A Cicd Pipeline In 20 Mins","en-us/blog/building-a-cicd-pipeline-in-20-mins.yml","en-us/blog/building-a-cicd-pipeline-in-20-mins",{"_path":735,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":736,"content":742,"config":749,"_id":751,"_type":13,"title":752,"_source":15,"_file":753,"_stem":754,"_extension":18},"/en-us/blog/building-build-images",{"title":737,"description":738,"ogTitle":737,"ogDescription":738,"noIndex":6,"ogImage":739,"ogUrl":740,"ogSiteName":675,"ogType":676,"canonicalUrls":740,"schema":741},"Getting [meta] with GitLab CI/CD: Building build images","Let's talk about building build images with GitLab CI/CD. 
The power of Docker as a build platform is unleashed when you get meta.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749678567/Blog/Hero%20Images/building-blocks.jpg","https://about.gitlab.com/blog/building-build-images","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"Getting [meta] with GitLab CI/CD: Building build images\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Brendan O'Leary\"}],\n        \"datePublished\": \"2019-08-28\",\n      }",{"title":737,"description":738,"authors":743,"heroImage":739,"date":745,"body":746,"category":683,"tags":747},[744],"Brendan O'Leary","2019-08-28","> An alternative title for this post could have been:\n\n>\n\n> I heard you liked Docker, so I put\n[dind](https://hub.docker.com/_/docker/).\n\n\n## Getting started\n\nIt should be clear by now that I love building stuff with GitLab CI/CD. From\n\n[DNS](https://medium.com/gitlab-magazine/ci-cd-all-the-things-pihole-625a0ceaf12)\n\nto [breakfast](/blog/introducing-auto-breakfast-from-gitlab/) GitLab CI/CD\n\noffers a pretty wide range. However, past those \"fun\" use cases, I also like\n\nto share some ~~best~~ practices I have acquired during my years of using\n[GitLab\n\nCI/CD](/solutions/continuous-integration/), both for software and\nnon-software projects alike.\n\n\nI crossed out \"best\" above because I don't really like the term \"best\npractices.\" It\n\nimplies that there is only one right answer to a given question – which is\nthe\n\nopposite of the point of computer science. Sure there are better and worse\nways to\n\ndo something – but like many things in life, you have to find what works for\n\nyou. \"[The best camera is the one you have with\nyou](https://www.amazon.com/Best-Camera-One-Thats-You/dp/0321684788)\"\n\ncomes to mind when building CI/CD for projects. 
Something that works is better than something that's pretty.\n\n\nBut, enough of my digression, let's get to the practice I wanted to share in this post: Building build images as part of the build process. Yes, it is precisely as meta as it sounds.\n\n\n## Why?\n\n\nOften when building a particular project, you may have several unique build dependencies. In many languages, package managers solve for the majority if not all of these dependencies – at least for build time (think [npm](https://www.npmjs.com), [RubyGems](https://rubygems.org/), [Maven](https://maven.apache.org/what-is-maven.html)). However, when we are building and deploying (CI/**CD**, let's remember) from a machine that is not our own, that may not be enough. There may be a few dependencies we might need from elsewhere.\n\n\nThe language libraries themselves are one such dependency – to build Java I'm going to need the JDK or JRE. To build Node, I'll need... well, Node, etc. In a Docker-based environment, those languages and dependencies typically have an official image on Docker Hub ([JRE from Oracle](https://hub.docker.com/_/oracle-serverjre-8) or [Node from Node.js](https://hub.docker.com/_/node) for instance). Assume, however, that I may need a few other things not included in **either** those official Docker images or the package manager I'm using. For instance, maybe I need a CLI tool for deploy ([AWS](https://aws.amazon.com/cli/), [Heroku](https://devcenter.heroku.com/articles/heroku-cli), [Firebase](https://firebase.google.com/docs/cli), etc.). We also might need a testing framework or tool like [Selenium](https://www.seleniumhq.org) or [headless Chrome](https://developers.google.com/web/updates/2017/04/headless-chrome). Or I may need other tools for packaging, testing, or deployment.\n\n\nSometimes there is a Docker image on Docker Hub for these combinations – or some of them – but not always a maintained version. 
One easy solution to this could be to just run the install of the tools before every job that needs it. This can even be \"automated\" using something like the [before_script](https://docs.gitlab.com/ee/ci/yaml/#before_script-and-after_script) syntax. However, this adds time to our pipeline and seems inefficient: Is there a better way?\n\n\n## Enter the GitLab Docker registry\n\nSince GitLab is a single application for the entire [DevOps](/topics/devops/) lifecycle, it ships out of the box with a built-in [Docker registry](https://docs.gitlab.com/ee/user/packages/container_registry/index.html). This can be a useful tool when deploying code in a containerized environment. We can build our application into a container and send it off into Kubernetes or some other Docker orchestrator.\n\n\nHowever, I also see this registry as an opportunity to save time in my pipeline (and save round trips to Docker Hub and back every time). For builds that require some of these extra dependencies, I like to build a \"build\" Docker image. That way, I have an image with all of those baked right in. Then, as part of my pipeline, I can build the image at the start (only when changes are made, or every time). And the rest of the pipeline can consume that image as the base image.\n\n\n## Putting it in practice\n\nFor example, let's see what it looks like to build a simple Docker image to use with deploying to [Google Firebase](https://firebase.google.com/).\n\n\nFirebase is a \"backend as a service\" tool that provides a database, authentication, and other services across platforms (web, iOS, and Android). It also includes web hosting and several other items that can be deployed through [a CLI](https://firebase.google.com/docs/cli). This tool makes getting started really easy. 
You can deploy the whole stack with `firebase deploy`. Alternatively, you can deploy a part (like [serverless](/topics/serverless/) functions) with a command like `firebase deploy --only functions`.\n\n\nMaking this work in a CI/CD world requires a few extra steps though. We'll need a Node Docker image that has the Firebase CLI in it, so let's make a simple Dockerfile to do that.\n\n\n> Putting this Dockerfile in `.meta/Dockerfile`\n\n\n```dockerfile\nFROM node:10\n\nRUN npm install -g firebase-tools\n```\n\n\nNext, I'll add a job to the front of my pipeline.\n\n\n> Added to the front of my `.gitlab-ci.yml`\n\n\n```yaml\nmeta-build-image:\n  image: docker:stable\n  services:\n    - docker:dind\n  stage: prepare\n  script:\n    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY\n    - cd .meta\n    - docker build -t $CI_REGISTRY/group/project/buildimage:latest .\n    - docker push $CI_REGISTRY/group/project/buildimage:latest\n  only:\n    refs:\n      - main\n    changes:\n      - .meta/Dockerfile\n```\n\n\nLet's break down that job:\n\n1. We use the `docker:stable` image and a service of `docker:dind`.\n\n1. The stage is my first stage, called `prepare`.\n\n1. In the script, we log in to the GitLab registry with the built-in variables and build the image. For more details, see the [GitLab documentation for building Docker images](https://docs.gitlab.com/ee/ci/docker/using_docker_build.html).\n\n1. We only run this on `main` and only when the `.meta/Dockerfile` changes. This makes sure we are specific about when we change the Docker image. 
We could also use the commit hash or other methods here to make the image more fungible.\n\n\nNow, in further jobs down the pipeline, I can use the latest build of the Docker image like this:\n\n\n```yaml\nfirestore:\n  image: registry.gitlab.com/group/project/buildimage\n  stage: deploy\n  script:\n    - firebase deploy --only firestore\n  only:\n    changes:\n      - .firebase-config/firestore.rules\n      - .firebase-config/firestore.indexes.json\n```\n\n\nWe only run this job if something about the [Firestore](https://firebase.google.com/docs/firestore) (the database from Firebase) configuration changes. And when it does, we run the `firebase deploy --only firestore` command in CI. I also added a token for deploy as a [GitLab CI/CD variable](https://docs.gitlab.com/ee/ci/variables/), based on the Firebase documentation for [using Firebase with CI](https://firebase.google.com/docs/cli#admin-commands).\n\n\n## Summary\n\nIn the end, this helps speed up pipelines by ensuring that you have a custom-built build image that you control. You don't have to rely on unstable or unmaintained Docker Hub images or even have a Docker Hub account yourself to get started.\n\n\nTo learn more about GitLab CI/CD you can [read the GitLab website](/solutions/continuous-integration/) or the [CI/CD docs](https://docs.gitlab.com/ee/ci/introduction/). 
Also,\nthere's a lot more to\n\nlearn about the [GitLab Docker\nregistry](https://docs.gitlab.com/ee/user/packages/container_registry/index.html).\n\n\nCover image by [Hack\nCapital](https://unsplash.com/@markusspiske?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText)\non\n[Unsplash](https://unsplash.com/search/photos/build?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText).\n\n{: .note}\n",[108,9,748],"tutorial",{"slug":750,"featured":6,"template":688},"building-build-images","content:en-us:blog:building-build-images.yml","Building Build Images","en-us/blog/building-build-images.yml","en-us/blog/building-build-images",{"_path":756,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":757,"content":763,"config":771,"_id":773,"_type":13,"title":774,"_source":15,"_file":775,"_stem":776,"_extension":18},"/en-us/blog/certificate-based-kubernetes-integration-sunsetting-on-gitlab-com",{"title":758,"description":759,"ogTitle":758,"ogDescription":759,"noIndex":6,"ogImage":760,"ogUrl":761,"ogSiteName":675,"ogType":676,"canonicalUrls":761,"schema":762},"Certificate-based Kubernetes integration sunsetting on GitLab.com","Learn how to check if you are impacted by the sunsetting in May 2026 and the steps needed to migrate to our proposed alternatives, including the GitLab agent for Kubernetes.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749662245/Blog/Hero%20Images/blog-image-template-1800x945__16_.png","https://about.gitlab.com/blog/certificate-based-kubernetes-integration-sunsetting-on-gitlab-com","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"Certificate-based Kubernetes integration sunsetting on GitLab.com\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Viktor Nagy\"}],\n        \"datePublished\": \"2025-02-17\",\n      
}",{"title":758,"description":759,"authors":764,"heroImage":760,"date":766,"body":767,"category":768,"tags":769,"updatedDate":770},[765],"Viktor Nagy","2025-02-17","__*Note: In a previously published version of this article, we stated that the certificate-based Kubernetes integration would be sunset in GitLab 18.0 in May 2025. That timeline has been extended to GitLab 19.0, planned for May 2026. See the [deprecation notice](https://docs.gitlab.com/update/deprecations/#gitlab-self-managed-certificate-based-integration-with-kubernetes) for details.*__\n\nThe certificate-based Kubernetes integration was [deprecated in GitLab November 2021](https://about.gitlab.com/blog/deprecating-the-cert-based-kubernetes-integration/), and is available on GitLab.com only to previous users. In May 2026, the integration will sunset on GitLab.com and will stop working. Customers often use the integration to deploy applications to production and non-production environments. As a result, failure to migrate to other options could cause a critical incident in your application delivery pipelines. This post outlines the alternative features that GitLab offers, points out how you can identify the potential impact on your GitLab.com groups and projects, and offers links to the GitLab documentation to learn more about the necessary migration steps.\n\n## Recommended alternative: The GitLab agent for Kubernetes\n\nThe GitLab agent for Kubernetes represents a significant advancement over the certificate-based integration, offering enhanced security, reliability, and functionality. 
Here are the key benefits of migrating to the agent-based approach:\n\n### Enhanced security  \n- Eliminates the need for storing cluster credentials in GitLab  \n- Provides secure, bidirectional communication between GitLab and your clusters  \n- Supports fine-grained access control and authorization policies  \n- Enables secure GitOps workflows with pull-based deployments\n\n### Improved reliability  \n- Maintains persistent connections, reducing deployment failures  \n- Handles network interruptions gracefully  \n- Provides better logging and troubleshooting capabilities  \n- Supports automatic reconnection and state recovery\n\n### Advanced features  \n- Real-time cluster information integrated into the GitLab UI  \n- Integration with GitLab CI/CD pipelines  \n- Support for multiple clusters and multi-tenant environments  \n- Enhanced GitOps capabilities by integrating with FluxCD\n\n## Get started with the GitLab agent for Kubernetes\n\nIf you haven't tried the GitLab Agent for Kubernetes yet, we strongly recommend going through the [getting started guides](https://docs.gitlab.com/ee/user/clusters/agent/getting_started). These guides will walk you through the basic setup and help you understand how the agent works in your environment. The hands-on experience will help make the migration process smoother.\n\n## Impact assessment\n\nWe implemented a [dedicated API](https://docs.gitlab.com/ee/api/cluster_discovery.html) endpoint to query all the certificate-based clusters within a GitLab group hierarchy. We recommend starting with this API to see if you have any clusters that need to be migrated.\n\nOnce you identify the clusters, you should:\n1. Find group and project owners using the certificate-based integration.  \n2. Check CI/CD pipelines for direct Kubernetes API calls.  \n3. Identify Auto DevOps projects using the old integration.  \n4. List any GitLab-managed clusters in use.  \n5. Set up the agent in the affected clusters. \n6. 
Follow the guidance provided in this post and record your progress in a tracking issue.\n\n## Update your CI/CD integration\n\nThe legacy certificate-based integration is typically used through GitLab CI/CD. Because the agent seamlessly integrates with GitLab CI/CD pipelines, you can use it to replace the certificate-based integration with relatively little effort. The agent-based CI/CD integration offers several improvements over the certificate-based approach:\n\n1. **Direct cluster access:** CI/CD jobs can interact with clusters through the agent without requiring separate credentials.  \n2. **Enhanced security:** You don't need to store cluster credentials in CI/CD variables. \n3. **Simplified configuration:** A single agent configuration file manages all cluster interactions.  \n4. **Better performance:** Persistent connections reduce deployment overhead.  \n5. **Flexible authorization:** On GitLab Premium and Ultimate, you can rely on impersonation features to restrict what CI/CD jobs can do in the cluster.\n\nAt a high level, there are three steps to migrating your existing CI/CD pipelines:  \n1. Set up the agent by following [the getting started guides](https://docs.gitlab.com/ee/user/clusters/agent/getting_started).  \n2. [Share the agent connection with the necessary groups and projects](https://docs.gitlab.com/ee/user/clusters/agent/ci_cd_workflow.html#authorize-the-agent).  \n3. [Select the agent in the pipeline jobs](https://docs.gitlab.com/ee/user/clusters/agent/ci_cd_workflow.html#update-your-gitlab-ciyml-file-to-run-kubectl-commands).\n\nYou can read more about [migrating Kubernetes deployments in general](https://docs.gitlab.com/ee/user/infrastructure/clusters/migrate_to_gitlab_agent.html) or about [the agent CI/CD integration](https://docs.gitlab.com/ee/user/clusters/agent/ci_cd_workflow.html) in the documentation.\n\n## Migrate your Auto DevOps configuration\n\nAuto DevOps is a set of CI/CD templates that are often customized by users. 
With Auto DevOps, you can automatically configure your CI/CD pipelines to build, test, and deploy your applications based on best practices. It's commonly used with the certificate-based integration for deploying applications to Kubernetes clusters. \n\nIf you use Auto DevOps and you rely on the certificate-based integration, you need to transition to the agent-based deployment mechanism. The migration process is straightforward:\n1. Set up the CI/CD integration as described above.  \n2. Configure the `KUBE_CONTEXT` environment variable to select an agent.  \n3. Remove the old certificate-based cluster integration.\n\nYou can read more about [using Auto DevOps with the agent for Kubernetes](https://docs.gitlab.com/ee/user/clusters/agent/ci_cd_workflow.html#environments-that-use-auto-devops) in the documentation.\n\n## Transition from GitLab-managed clusters to GitLab-managed Kubernetes resources\n\nWith GitLab-managed clusters, GitLab automatically creates and manages Kubernetes resources for your projects. When you allow GitLab to manage your cluster, it creates RBAC resources like a Namespace and ServiceAccount. \n\nIf you use GitLab-managed clusters, you should transition to GitLab-managed Kubernetes resources, which offers a more flexible and secure approach to cluster management.\n\nTo migrate: \n1. Document your existing cluster configuration.  \n2. Create corresponding Kubernetes resource definitions.  \n3. Store configurations in your repository.  \n4. Configure the GitLab agent to manage these resources.  \n5. Verify resource management and deployment. \n6. 
Remove the old cluster integration.\n\nYou can read more about [GitLab-managed Kubernetes resources](https://docs.gitlab.com/ee/user/clusters/agent/getting_started) in the documentation.\n\n## Manage cloud provider clusters created through GitLab\n\nIf you created Kubernetes clusters through the GitLab integration with Google Kubernetes Engine (GKE) or Amazon Elastic Kubernetes Service (EKS), these clusters were provisioned in your respective cloud provider accounts. After the certificate-based integration is removed:\n1. Your clusters will remain fully operational in Google Cloud or AWS.  \n2. You will need to manage these clusters directly through your cloud provider's console:  \n   - GKE clusters through Google Cloud Console  \n   - EKS clusters through AWS Management Console\n\nTo view cluster information within GitLab:\n1. Install the GitLab agent for Kubernetes.  \n2. Configure the Kubernetes dashboard integration.  \n3. Check the dashboard for cluster details and resource information.\n\nThis change only affects how you interact with the clusters through GitLab; it does not impact the clusters' operation or availability in your cloud provider accounts.\n\nYou should still migrate your deployment setups as described above.\n\n## What should I do next?\n\nTo minimize the impact on you and your infrastructure, you should follow these steps:\n1. Check if you are impacted as soon as possible.  \n2. Plan your migration timeline before May 2026.  \n3. Start with non-production environments to gain experience.  \n4. Document your current setup and desired state.  \n5. Test the agent-based approach in a staging environment.  \n6. Gradually migrate production workloads.  \n7. Monitor and validate the new setup.\n\nThe migration to the GitLab agent for Kubernetes represents a significant improvement in how GitLab interacts with Kubernetes clusters. 
While the migration requires careful planning and execution, the benefits in terms of security, reliability, and functionality make it a worthwhile investment for your DevSecOps infrastructure.","product",[108,9,768,485],"2025-04-18",{"slug":772,"featured":6,"template":688},"certificate-based-kubernetes-integration-sunsetting-on-gitlab-com","content:en-us:blog:certificate-based-kubernetes-integration-sunsetting-on-gitlab-com.yml","Certificate Based Kubernetes Integration Sunsetting On Gitlab Com","en-us/blog/certificate-based-kubernetes-integration-sunsetting-on-gitlab-com.yml","en-us/blog/certificate-based-kubernetes-integration-sunsetting-on-gitlab-com",{"_path":778,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":779,"content":785,"config":792,"_id":794,"_type":13,"title":795,"_source":15,"_file":796,"_stem":797,"_extension":18},"/en-us/blog/ci-cd-the-ticket-to-multicloud",{"title":780,"description":781,"ogTitle":780,"ogDescription":781,"noIndex":6,"ogImage":782,"ogUrl":783,"ogSiteName":675,"ogType":676,"canonicalUrls":783,"schema":784},"CI/CD: The ticket to multicloud","Read our expert panel from MulticloudCon on how CI/CD and cloud-agnostic DevOps help organizations go multicloud and increase productivity.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749679235/Blog/Hero%20Images/cloud-native-predictions-2019.jpg","https://about.gitlab.com/blog/ci-cd-the-ticket-to-multicloud","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"CI/CD: The ticket to multicloud\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Chrissie Buchanan\"}],\n        \"datePublished\": \"2020-01-17\",\n      }",{"title":780,"description":781,"authors":786,"heroImage":782,"date":788,"body":789,"category":790,"tags":791},[787],"Chrissie Buchanan","2020-01-17","\n\nIn November 2019, we had the opportunity to co-host [MulticloudCon](https://multicloudcon.io/), a zero-day event with our 
partners at [Upbound](https://upbound.io/). The event featured experts in cloud, Kubernetes, database resources, CI/CD, security, and more, to learn how [multicloud is evolving](/topics/multicloud/) and empowering developers and operations experts across the industry.\n\nDevOps can play a major role in cloud usage. In this discussion from MulticloudCon, we assembled a panel of experts across the industry to talk about [CI/CD](/solutions/continuous-integration/) and DevOps in multiple clouds. As [multicloud](/topics/multicloud/) technology continues to evolve, tools can give organizations more control and flexibility on where their workloads live and where they deploy.\n\n![CI/CD MulticloudCon panelists](https://about.gitlab.com/images/blogimages/multicloudcon-panel.png){: .shadow.medium.center}\n\n## Panel highlights\n\n### Why multicloud is important:\n\n> “If we have a single point of failure on a cloud, it is really easy to have some downtime [or] an outage and be like, \"Well, it was my cloud provider's fault.\" But, to our customers, that doesn't matter. You as a company, we're down and that affects them.”\n– Ana Medina, Chaos Engineer at [Gremlin](https://www.gremlin.com/)\n\n> “There are a lot more applications now that are becoming event-driven and are relying on integrations with cloud providers. And if it's more than one, you can't just test on one provider and go well it works across the board. You need to be expanding your test coverage to cover multiple cloud providers.”\n– Denver Williams, DevOps/SRE Consultant at [Vulk Coop](http://vulk.coop/)\n\n\n### The challenges of multicloud:\n\n> “When you're running in multiple clouds, that also introduces problems… I'm talking more specifically about high availability and also fault tolerance and then disaster recovery. 
These are things people just think about, ‘Oh we need to connect, integrate.’ But at the end of the day, if you're serious about running these applications, you need to also think about those things. And introducing those complexities from the different cloud providers will definitely impact your operations.”\n– Angel Rivera, Developer Advocate at [CircleCI](https://circleci.com/)\n\n\n### How tools impact a multicloud strategy:\n\n> “One thing that helps a lot when you're working on deploys for multicloud is to choose tooling that is going to support multiple clouds off the bat… One thing you really want to avoid, if possible, is ending up with different workflows for different cloud providers. Because then you're testing with different CI/CD pipelines. It's different code and it's inevitably going to behave differently. And then you're going to run into weird bugs.” – Denver Williams\n\n> “When I'm talking to users and GitLab customers that are doing multicloud, they're doing a lot of orchestration and abstraction, and they're having to write an abstraction layer in order to homogenize a logic. A lot of folks have talked about Crossplane today. When I see these types of capabilities and Crossplane in that community emerging, that's pretty exciting because that's what I see a lot of folks writing all the time. That can just be pulled out into a tool and offloaded so that you can focus on the business logic.” – [William Chia](/company/team/#williamchia), Sr. Product Marketing Manager at GitLab\n\nLearn more about GitLab’s Crossplane integration in our [12.5 release](/releases/2019/11/22/gitlab-12-5-released/#crossplane-support-in-gitlab-managed-apps).\n\n\n### CI/CD and multicloud best practices:\n\n> “There's always going to be platform-specific code. Just keep that separate and then your actual YAML logic, keep it agnostic.” – Uma Mukkara, Co-founder and COO at [MayaData](https://mayadata.io/)\n\n> “At Gremlin we help companies avoid downtime. 
So, we're starting to work with integrations with CI/CD platforms so folks actually start having a stage that they run chaos engineering experiments... You can actually build a lot more testing around past outages that your company has had or maybe some of the large outages that we've seen around in the industry. Building testing around those scenarios, [we’re] making sure the caching layers are able to handle when one of your services goes down... If your caching layer limits out, the other services that are dependent on it are able to still continue providing a good user experience.” – Ana Medina\n\n> “I always encourage people who are writing pipelines in our platform to do some checks against APIs that they use so that they can just fail their builds right away, instead of wasting money and effort and going to build that. It's going to eventually fail.” – Angel Rivera\n\nMulticloud is made possible through cloud native applications built from containers using services from different cloud providers, and allows for multiple services to be managed in one architecture. 
CI/CD plays a big role in workflow portability, ensuring workflows stay consistent (no matter where projects are deployed).\n\nWatch the full panel discussion below.\n\n\u003C!-- blank line -->\n\u003Cfigure class=\"video_container\">\n  \u003Ciframe src=\"https://www.youtube-nocookie.com/embed/Sx02_fyaGgc\" frameborder=\"0\" allowfullscreen=\"true\"> \u003C/iframe>\n\u003C/figure>\n\u003C!-- blank line -->\n\nPhoto by [Marc Wieland](https://unsplash.com/photos/zrj-TPjcRLA?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText) on [Unsplash](https://unsplash.com/search/photos/clouds?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText)\n{: .note}\n\n","insights",[108,727,9],{"slug":793,"featured":6,"template":688},"ci-cd-the-ticket-to-multicloud","content:en-us:blog:ci-cd-the-ticket-to-multicloud.yml","Ci Cd The Ticket To Multicloud","en-us/blog/ci-cd-the-ticket-to-multicloud.yml","en-us/blog/ci-cd-the-ticket-to-multicloud",{"_path":799,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":800,"content":806,"config":816,"_id":818,"_type":13,"title":819,"_source":15,"_file":820,"_stem":821,"_extension":18},"/en-us/blog/cicd-tunnel-impersonation",{"title":801,"description":802,"ogTitle":801,"ogDescription":802,"noIndex":6,"ogImage":803,"ogUrl":804,"ogSiteName":675,"ogType":676,"canonicalUrls":804,"schema":805},"Fine-grained permissions with impersonation in CI/CD tunnel","Learn how to use fine-grained permissions via generic impersonation in CI/CD Tunnel","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749667435/Blog/Hero%20Images/tunnel.jpg","https://about.gitlab.com/blog/cicd-tunnel-impersonation","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"How to use fine-grained permissions via generic impersonation in CI/CD Tunnel\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Cesar Saavedra\"}],\n        \"datePublished\": 
\"2022-02-01\",\n      }",{"title":807,"description":802,"authors":808,"heroImage":803,"date":810,"body":811,"category":683,"tags":812},"How to use fine-grained permissions via generic impersonation in CI/CD Tunnel",[809],"Cesar Saavedra","2022-02-01","\nThe [CI/CD Tunnel](https://docs.gitlab.com/ee/user/clusters/agent/ci_cd_workflow.html), which leverages the [GitLab Agent for Kubernetes](https://docs.gitlab.com/ee/user/clusters/agent/), enables users to access Kubernetes clusters from GitLab CI/CD jobs. In this blog post, we review how you can securely access your clusters from your CI/CD pipelines by using generic impersonation. In addition, we will briefly cover the activity list of the GitLab Agent for Kubernetes, a capability recently introduced by GitLab, that can help you detect and troubleshoot faulty events.\n\n## Using impersonation with your CI/CD tunnel\n\nThe CI/CD Tunnel leverages the GitLab Agent for Kubernetes, which permits the secure connectivity between GitLab and your Kubernetes cluster without the need to expose your cluster to the internet and outside your firewall. The CI/CD Tunnel allows you to connect to your Kubernetes cluster from your CI/CD jobs/pipelines.\n\nBy default, the CI/CD Tunnel inherits all the permissions from the service account used to install the Agent in the cluster. 
However, fine-grained permissions can be used in conjunction with the CI/CD Tunnel to restrict and manage access to your cluster resources.\n\nFine-grained permission control with the CI/CD Tunnel via impersonation:\n\n- Allows you to leverage your K8s authorization capabilities to limit what can be done through the CI/CD Tunnel on your running cluster\n\n- Lowers the risk of providing unlimited access to your K8s cluster with the CI/CD Tunnel\n\n- Segments fine-grained permissions with the CI/CD Tunnel at the project or group level\n\n- Controls permissions with the CI/CD Tunnel at the username or service account level\n\nTo restrict access to your cluster, you can use impersonation. To specify impersonations, use the `access_as` attribute in your Agent's configuration file and use Kubernetes RBAC rules to manage impersonated account permissions.\n\nYou can impersonate:\n- The Agent itself (default)\n- The CI job that accesses the cluster\n- A specific user or system account defined within the cluster\n\n## Steps to exercise impersonation with the CI/CD Tunnel\n\nLet's go through the steps to exercise impersonation with the CI/CD Tunnel.\n\n### Creating your Kubernetes cluster\n\nIn order to exercise the capabilities described above, we need a Kubernetes cluster. Although you can use any Kubernetes distribution, for this example we create a GKE Standard Kubernetes cluster and name it \"csaavedra-ga4k-cluster\". We select the zone and version 1.21 of Kubernetes and ensure that our cluster will have three nodes. 
We leave the security and metadata screens with their default values and click on the create button:\n\n![Creating a GKE cluster](https://about.gitlab.com/images/blogimages/cicd-tunnel-impersonate/0-gke-creation.png){: .shadow.medium.center.wrap-text}\nCreating a GKE cluster\n{: .note.text-center}\n\n### Sample projects to be used\n\nLet's proceed now to this [top-level group](https://gitlab.com/tech-marketing/sandbox/gl-14-5-cs-demos), which contains the three projects we will use to show impersonation with the CI/CD Tunnel. Impersonation can be set at the project or group level. In this example, we will show setting impersonation at the project level:\n\n![Project structure in GitLab](https://about.gitlab.com/images/blogimages/cicd-tunnel-impersonate/1-project-struct.png){: .shadow.medium.center.wrap-text}\nProject structure in GitLab\n{: .note.text-center}\n\nProject \"ga4k\" will configure the GitLab Agent for Kubernetes and also set impersonations with the CI/CD tunnel. Project \"sample-application\" will use the CI/CD tunnel, managed by the agent, to connect to the Kubernetes cluster and execute a pipeline using different impersonations. Project \"cluster-management\" will also use the CI/CD tunnel to connect to the cluster and install the Ingress application on it.\n\nNot only does the CI/CD tunnel streamline the deployment, management, and monitoring of Kubernetes-native applications, but it also does so securely and safely by using impersonations that leverage your Kubernetes cluster's RBAC rules.\n\nProject \"ga4k\" contains and manages the configuration for the GitLab Agent for K8s called \"csaavedra-agentk\". Looking at its \"config.yaml\" file, we see that the agent points to itself for manifest projects, but most importantly, it provides CI/CD tunnel access to two projects: \"sample-application\" and \"cluster-management\". 
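The relevant part of such a "config.yaml" can be sketched with the documented `ci_access` keyword; this is a sketch only, with the project paths assumed from the top-level group above:

```yaml
# Agent configuration lives in the "ga4k" project,
# at .gitlab/agents/csaavedra-agentk/config.yaml
ci_access:
  projects:
    # Projects whose CI/CD pipelines may reach the cluster through this agent
    - id: tech-marketing/sandbox/gl-14-5-cs-demos/sample-application
    - id: tech-marketing/sandbox/gl-14-5-cs-demos/cluster-management
```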
This means that these two projects' CI/CD pipelines will have access to the K8s cluster that the agent is securely connected to:\n\n![The GitLab Agent for K8s configuration](https://about.gitlab.com/images/blogimages/cicd-tunnel-impersonate/2-agent-config.png){: .shadow.medium.center.wrap-text}\nThe GitLab Agent for K8s configuration\n{: .note.text-center}\n\nProject \"sample-application\" has a pipeline, which we will later execute under different impersonations. And project \"cluster-management\" has a pipeline that will install only the Ingress application on the Kubernetes cluster, as configured in its helmfile.yaml file:\n\n![Deployable applications in cluster-management project](https://about.gitlab.com/images/blogimages/cicd-tunnel-impersonate/3-cluster-mgmt-helmfile.png){: .shadow.medium.center.wrap-text}\nDeployable applications in cluster-management project\n{: .note.text-center}\n\n### Connecting the Agent to your Kubernetes cluster\n\nLet's head back to project \"ga4k\" and connect to the Kubernetes cluster via the agent. We select agent \"csaavedra-agentk\" to register with GitLab:\n\n![List of defined agents](https://about.gitlab.com/images/blogimages/cicd-tunnel-impersonate/4-agents-popdown.png){: .shadow.medium.center.wrap-text}\nList of defined agents\n{: .note.text-center}\n\nThis step generates a token that we can use to install the agent on the cluster. We copy the Docker command to our local desktop for later use. 
Notice that the command includes the generated token, which you can also copy:\n\n![Docker command to deploy agent to your K8s cluster](https://about.gitlab.com/images/blogimages/cicd-tunnel-impersonate/5-docker-cmd.png){: .shadow.medium.center.wrap-text}\nDocker command to deploy agent to your K8s cluster\n{: .note.text-center}\n\nFrom a local command window, we ensure that our connectivity parameters to GCP are correct:\n\n![Checking your GCP connectivity parameters](https://about.gitlab.com/images/blogimages/cicd-tunnel-impersonate/6-gcp-connectivity.png){: .shadow.medium.center.wrap-text}\nChecking your GCP connectivity parameters\n{: .note.text-center}\n\nWe then add the credentials to our kubeconfig file to connect to our newly created Kubernetes cluster \"csaavedra-ga4k-cluster\" and verify that our context is set to it:\n\n![Adding your cluster credentials to your kubeconfig](https://about.gitlab.com/images/blogimages/cicd-tunnel-impersonate/7-adding-creds.png){: .shadow.medium.center.wrap-text}\nAdding the credentials of your cluster to your kubeconfig\n{: .note.text-center}\n\nOnce this is done, we can list all the pods that are up and running on the cluster by entering `kubectl get pods --all-namespaces`:\n\n![Listing the pods in your running cluster](https://about.gitlab.com/images/blogimages/cicd-tunnel-impersonate/8-listing-pods.png){: .shadow.medium.center.wrap-text}\nListing the pods in your running cluster\n{: .note.text-center}\n\nFinally, we paste the Docker command that will install the GitLab Agent for Kubernetes to this cluster, making sure that its namespace is \"ga4k-agent\":\n\n![Deploying the agent to your K8s cluster](https://about.gitlab.com/images/blogimages/cicd-tunnel-impersonate/9-pasted-docker-cmd.png){: .shadow.medium.center.wrap-text}\nDeploying the agent to your K8s cluster\n{: .note.text-center}\n\nWe list the pods one more time to check that the agent pod is up and running on the cluster:\n\n![Agent up and running on your K8s 
cluster](https://about.gitlab.com/images/blogimages/cicd-tunnel-impersonate/10-agent-up.png){: .shadow.medium.center.wrap-text}\nAgent up and running on your K8s cluster\n{: .note.text-center}\n\nThe screen will refresh and show our Kubernetes cluster connected via the agent:\n\n![Agent connected to your K8s cluster](https://about.gitlab.com/images/blogimages/cicd-tunnel-impersonate/11-agent-connected.png){: .shadow.large.center.wrap-text}\nAgent connected to your K8s cluster\n{: .note.text-center}\n\n### The Agent's Activity Information page\n\nClicking on the agent name takes us to the Agent's Activity Information page, which lists agent events in real time. This information can help monitor your cluster's activity and detect and troubleshoot faulty events from your cluster. Connection and token information is currently listed with more events coming in future releases:\n\n![Agent activity information page](https://about.gitlab.com/images/blogimages/cicd-tunnel-impersonate/12-agent-activity.png){: .shadow.small.center.wrap-text}\nAgent activity information page\n{: .note.text-center}\n\n### Deploying Ingress to your Kubernetes cluster using default impersonation\n\nBy default, the CI/CD Tunnel inherits all the permissions from the service account used to install the agent in the cluster. Per the agent's configuration, the CI/CD pipelines of the \"cluster-management\" project will have access to the K8s cluster that the agent is securely connected to. Let's leverage this connectivity to deploy the Ingress application to the Kubernetes cluster from project \"cluster-management\". Let's make a small update to the project pipeline to launch it. 
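Under the hood, a job that deploys through the agent first switches kubectl to the agent's context. A minimal sketch of such a job, assuming the documented `<agent-config-project-path>:<agent-name>` context naming and the project paths from this example:

```yaml
deploy:
  stage: deploy
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]
  script:
    # The context name is <agent-config-project-path>:<agent-name>
    - kubectl config use-context tech-marketing/sandbox/gl-14-5-cs-demos/ga4k:csaavedra-agentk
    - kubectl get nodes
```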
Once the pipeline launches, we navigate to its detail view to track its completion:\n\n![Project \"cluster-management\" pipeline completed](https://about.gitlab.com/images/blogimages/cicd-tunnel-impersonate/13-cluster-mgmt-pipeline.png){: .shadow.small.center.wrap-text}\nProject \"cluster-management\" pipeline completed\n{: .note.text-center}\n\nand check the log of its **apply** job to verify that it was able to switch to the agent's context and successfully ran all the installation steps:\n\n![Ingress deployed to your cluster via CI/CD Tunnel using default impersonation](https://about.gitlab.com/images/blogimages/cicd-tunnel-impersonate/14-apply-job-log.png){: .shadow.medium.center.wrap-text}\nIngress deployed to your cluster via CI/CD Tunnel using default impersonation\n{: .note.text-center}\n\nFor further verification, we list the pods in the cluster and check that the ingress pods are up and running:\n\n![Ingress pods up and running](https://about.gitlab.com/images/blogimages/cicd-tunnel-impersonate/15-ingress-pods-up.png){: .shadow.medium.center.wrap-text}\nIngress pods up and running on your cluster\n{: .note.text-center}\n\n### Start trailing the agent's log file to watch updates\n\nBefore we start the impersonation use cases, let's start trailing the agent's log file from a command window:\n\n![Trailing agent log from the command line](https://about.gitlab.com/images/blogimages/cicd-tunnel-impersonate/16-trail-agent-log.png){: .shadow.medium.center.wrap-text}\nTrailing agent log from the command line\n{: .note.text-center}\n\nAnd also let's increase its logging to debug:\n\n![Increasing the agent log level to debug](https://about.gitlab.com/images/blogimages/cicd-tunnel-impersonate/17-agent-logging-level.png){: .shadow.medium.center.wrap-text}\nIncreasing the agent log level to debug\n{: .note.text-center}\n\n### Running impersonation using access_as:ci_job\n\nLet's now impersonate the CI job that accesses the cluster. 
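In configuration terms, CI-job impersonation is a small change to the project's entry in the agent's "config.yaml"; a sketch using the documented `access_as` syntax, with the project path assumed:

```yaml
ci_access:
  projects:
    - id: tech-marketing/sandbox/gl-14-5-cs-demos/sample-application
      access_as:
        ci_job: {}   # requests run as the CI job rather than as the agent
```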
For this, we modify the agent's configuration and add the \"access_as\" attribute with the \"ci_job\" tag under it:\n\n![Impersonating the CI job](https://about.gitlab.com/images/blogimages/cicd-tunnel-impersonate/18-ci-job-impersonation.png){: .shadow.medium.center.wrap-text}\nImpersonating the CI job\n{: .note.text-center}\n\nAs we save the updated configuration, we verify in the log output that the update has taken place in the running agent:\n\n![Agent updated with CI job impersonation](https://about.gitlab.com/images/blogimages/cicd-tunnel-impersonate/19-agent-conf-updated.png){: .shadow.large.center.wrap-text}\nAgent updated with CI job impersonation\n{: .note.text-center}\n\nNotice that the pipeline of the \"sample-application\" project has a test stage and a test job. It sets the variable KUBE_CONTEXT first, loads an image with the version of kubectl that matches the version of the K8s cluster, and executes two kubectl commands that access the remote cluster via the agent:\n\n![Project \"sample-application\" pipeline](https://about.gitlab.com/images/blogimages/cicd-tunnel-impersonate/20-sample-application-pipeline.png){: .shadow.medium.center.wrap-text}\nProject \"sample-application\" pipeline\n{: .note.text-center}\n\nWe manually execute the pipeline of the \"sample-application\" project and verify in the job log output that the context switch was successful and that the kubectl commands executed correctly:\n\n![Job log output with CI impersonation](https://about.gitlab.com/images/blogimages/cicd-tunnel-impersonate/21-ci-impersonation-job-log.png){: .shadow.medium.center.wrap-text}\nJob log output with CI impersonation\n{: .note.text-center}\n\n### Running impersonation using access_as:impersonate:username\n\nThe last use case is the impersonation of a specific user or system account defined within the cluster. I have pre-created a service account called \"jane\" on the Kubernetes cluster under the \"default\" namespace. 
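A service account with read-only access to pods, like "jane", can be set up with a standard Kubernetes Role and RoleBinding; a sketch with assumed resource names:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader            # assumed name
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding    # assumed name
  namespace: default
subjects:
  - kind: ServiceAccount
    name: jane
    namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader
```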
And \"jane\" has been given the permission to do a \"get\", \"list\", and \"watch\" on the cluster pods as you can see by the output in the command window:\n\n![Jane user with permission to list pods](https://about.gitlab.com/images/blogimages/cicd-tunnel-impersonate/22-jane-and-perms.png){: .shadow.medium.center.wrap-text}\nJane user with permission to list pods\n{: .note.text-center}\n\nRemember that the service account \"gitlab-agent\" under namespace \"ga4k-agent\" was created earlier when we installed the agent by running the Docker command. In order for the agent to be able to impersonate another service account or user, it needs to have the permissions to do so. We do this by creating a clusterrole \"impersonate\" for impersonating users, groups, and service accounts, and then create a clusterrolebinding \"allowimpersonator\" to give these permissions for the \"default\" namespace to the agent \"gitlab-agent\" in the \"ga4k-agent\" namespace:\n\n![Giving impersonation permission to agent](https://about.gitlab.com/images/blogimages/cicd-tunnel-impersonate/23-clusterrole-perm-to-agent.png){: .shadow.large.center.wrap-text}\nGiving impersonation permission to agent\n{: .note.text-center}\n\nWe then edit the agent's configuration and add the \"impersonate\" attribute and provide the service account for \"jane\" as the parameter for the \"username\" tag:\n\n![Impersonating a specific user](https://about.gitlab.com/images/blogimages/cicd-tunnel-impersonate/24-user-impersonation.png){: .shadow.medium.center.wrap-text}\nImpersonating a specific user called jane\n{: .note.text-center}\n\nAs we commit the changes, we check the log output to verify that the update has taken place in the running agent:\n\n![Agent updated with user impersonation](https://about.gitlab.com/images/blogimages/cicd-tunnel-impersonate/25-agent-conf-updated.png){: .shadow.large.center.wrap-text}\nAgent updated with user impersonation\n{: .note.text-center}\n\nSince we know that \"jane\" has the 
permission to list the running pods in the cluster, let's head to the project \"sample-application\" pipeline and add the command \"kubectl get pods --all-namespaces\" to it:\n\n![Adding get pods command that jane is allowed to run](https://about.gitlab.com/images/blogimages/cicd-tunnel-impersonate/26-adding-get-pods-cmd.png){: .shadow.medium.center.wrap-text}\nAdding get pods command that jane is allowed to run\n{: .note.text-center}\n\nWe commit the update, head over to the running pipeline, and drill into the \"test\" job log output to see that the context switch was successful and that the kubectl commands executed correctly, including the listing of the running pods in the cluster:\n\n![Job output for pipeline impersonating jane](https://about.gitlab.com/images/blogimages/cicd-tunnel-impersonate/27-user-impersonation-job-log.png){: .shadow.medium.center.wrap-text}\nJob output for pipeline impersonating jane\n{: .note.text-center}\n\n## Conclusion\n\nIn this blog post, we reviewed how you can securely access your Kubernetes clusters from your CI/CD pipelines by using generic impersonation.  
In addition, we showed the activity list of the GitLab Agent for Kubernetes, which can help you detect and troubleshoot faulty events from your cluster.\n\nTo see these capabilities in action, check out the following video:\n\n\u003C!-- blank line -->\n\u003Cfigure class=\"video_container\">\n  \u003Ciframe src=\"https://www.youtube.com/embed/j8SJuHd7Zsw\" frameborder=\"0\" allowfullscreen=\"true\"> \u003C/iframe>\n\u003C/figure>\n\u003C!-- blank line -->\n\nCover image by Jakob Søby on [Unsplash](https://www.unsplash.com)\n{: .note}\n",[813,814,815,9],"releases","CI","CD",{"slug":817,"featured":6,"template":688},"cicd-tunnel-impersonation","content:en-us:blog:cicd-tunnel-impersonation.yml","Cicd Tunnel Impersonation","en-us/blog/cicd-tunnel-impersonation.yml","en-us/blog/cicd-tunnel-impersonation",{"_path":823,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":824,"content":830,"config":836,"_id":838,"_type":13,"title":839,"_source":15,"_file":840,"_stem":841,"_extension":18},"/en-us/blog/cloud-native-storage-beginners",{"title":825,"description":826,"ogTitle":825,"ogDescription":826,"noIndex":6,"ogImage":827,"ogUrl":828,"ogSiteName":675,"ogType":676,"canonicalUrls":828,"schema":829},"A guide to cloud native storage for beginners","Choosing a cloud native development strategy is a smart step in DevOps, but storage can be a challenge. 
Here’s what you need to consider.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749681560/Blog/Hero%20Images/cloudnative.jpg","https://about.gitlab.com/blog/cloud-native-storage-beginners","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"A guide to cloud native storage for beginners\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Valerie Silverthorne\"}],\n        \"datePublished\": \"2020-09-10\",\n      }",{"title":825,"description":826,"authors":831,"heroImage":827,"date":832,"body":833,"category":790,"tags":834},[680],"2020-09-10","\n\n[DevOps](/topics/devops/) and cloud native go hand-in-hand but that doesn’t mean the journey is straightforward, particularly when it comes to storage. Here’s everything you need to know about cloud-native storage if you’re just getting started. \n\n## What is cloud-native software development?\n\nBoiled down, the term [cloud native](/topics/cloud-native/) simply means taking advantage of the power of the cloud and doing so from the beginning of the software development lifecycle. Flexibility, speed, and “always on” capabilities make the cloud an ideal place for [modern software development](https://www.infoworld.com/article/3281046/what-is-cloud-native-the-modern-way-to-develop-software.html).\n\nAlthough [containers aren’t limited to just the cloud](https://containerjournal.com/features/what-do-containers-have-to-do-with-being-cloud-native-anyway/), they are a key part of cloud native software development because they make it simple to move chunks of code from cloud to cloud using the same set of tools and processes. Containers can be created, moved or deleted with just the click of a mouse. [Kubernetes](/solutions/kubernetes/) is an increasingly popular open source tool for managing containers.\n\n## Why storage is the stumbling block\n\nSo far, so good, but what about storage? 
The features that make containers so ideal for cloud native (flexible, portable, disposable) are the same things that make them a storage nightmare. Developers finished with containers can just kill them – but for most apps to work, they need access to reliable storage that can’t be eliminated. \n\nAnd that’s the big hiccup when it comes to cloud native storage, says [Brendan O’Leary](/company/team/#brendan), senior developer evangelist at GitLab. “Almost every app in existence needs database storage,” Brendan explains. “But in a cloud native world things come and go but storage can’t do that. Storage has to stick around and solving for that is the hardest part of cloud native. That’s the thing we need to conquer next.”\n\nThe [Cloud Native Computing Foundation](https://www.cncf.io/) says the goal is to create [\"persistent information\"](https://www.cncf.io/blog/a-complete-storage-guide-for-your-kubernetes-storage-problems/) that exists no matter what’s going on around it. Ideally the CNCF recommends that information not be stored in what it calls \"volatile\" containers.\n\n## Solutions on the horizon\n\nThe good news is that a number of companies are trying to solve the tricky problem of cloud native storage. 
Here’s a quick look in no particular order (Cockroach and Rancher are GitLab partners):\n\n* [OpenEBS](https://openebs.io) is a Kubernetes-based tool to create stateful applications using Container Attached Storage.\n* Also Kubernetes-based, [Rook](https://rook.io) offers self-managed, scaling, and healing storage services.\n* [Cockroach Labs](https://www.cockroachlabs.com/) uses Distributed SQL to make databases portable and scalable.\n* [Rancher Longhorn](https://longhorn.io) offers persistent storage for Kubernetes.\n\n## Final considerations\n\nA Gartner Group report, “Top Emerging Trends in Cloud-Native Infrastructure”, advises clients to “choose storage solutions aligned with container-native data service requirements and the standard storage interface, [Container Storage Interface (CSI)](https://www.architecting.it/blog/container-storage-interface/).” CSI is an API that lets container orchestration platforms like Kubernetes seamlessly communicate with stored data via a plug-in. \n\nAnd finally, there’s no shame in choosing something straightforward, Brendan suggests, particularly if you’re just getting started in the Kubernetes world. “You can go with a cloud provider’s data storage options,” he says. “That’s still cloud native but it’s even simpler to just use the tools that exist. 
Don’t try to reinvent the wheel.”\n\nCover image by [Joshua Coleman](https://unsplash.com/@joshstyle) on [Unsplash](https://unsplash.com)\n{: .note}\n",[727,9,835],"open source",{"slug":837,"featured":6,"template":688},"cloud-native-storage-beginners","content:en-us:blog:cloud-native-storage-beginners.yml","Cloud Native Storage Beginners","en-us/blog/cloud-native-storage-beginners.yml","en-us/blog/cloud-native-storage-beginners",{"_path":843,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":844,"content":850,"config":856,"_id":858,"_type":13,"title":859,"_source":15,"_file":860,"_stem":861,"_extension":18},"/en-us/blog/cncf-five-technologies-to-watch-in-2021",{"title":845,"description":846,"ogTitle":845,"ogDescription":846,"noIndex":6,"ogImage":847,"ogUrl":848,"ogSiteName":675,"ogType":676,"canonicalUrls":848,"schema":849},"CNCF's 5 technologies to watch in 2021","We predict how CNCF's five tech trends to watch will impact cloud native and the tech industry over the next year and beyond.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749680997/Blog/Hero%20Images/clouds-cover.jpg","https://about.gitlab.com/blog/cncf-five-technologies-to-watch-in-2021","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"CNCF's 5 technologies to watch in 2021\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Brendan O'Leary\"}],\n        \"datePublished\": \"2020-11-24\",\n      }",{"title":845,"description":846,"authors":851,"heroImage":847,"date":852,"body":853,"category":790,"tags":854},[744],"2020-11-24","\n\nLast week the Cloud Native Computing Foundation (CNCF) held [KubeCon + CloudNativeCon North America](https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/). Even with conferences shifting from in-person to virtual, KubeCon still draws huge crowds and the entire industry's attention. 
Besides being one of the largest tech conferences of the year, KubeCon continues to show the cutting edge of technology at the forefront of the industry.\n\nToward the conclusion of the conference, [Liz Rice](https://www.cncf.io/spotlights/cncf-community-leader-spotlight-liz-rice/) - chairperson of the CNCF's Technical Oversight Committee (TOC) and VP of Open Source Engineering at Aqua Security - got on the virtual stage to share where the CNCF is going in the coming year and to talk about predictions for the industry as a whole. These predictions covered a vast landscape of new and emerging technologies and ideas. Some of the ideas are entirely within the bounds of the cloud native community, like service mesh, while others, like WebAssembly and eBPF, have even broader impact inside and outside of cloud native technology.\n\nIn the six years since the initial release of Kubernetes, the cloud native landscape has seen a proliferation of technologies and projects related to Kubernetes and cloud native in general. Rice even talks about this in [her closing remarks](https://kccncna20.sched.com/event/eoIl/keynote-predictions-from-the-technical-oversight-committee-toc-liz-rice-cncf-toc-chair-vice-president-open-source-engineering-aqua-security), discussing the much loved and much talked about CNCF landscape. After adding many more graduated projects this year, one of the first predictions is that the coming year will see some current sandboxed projects at the CNCF fail. As Rice explains, this is a natural consequence of the CNCF pushing for innovation because not every innovative project will find a use case in the \"real world\" that justifies the effort of bringing it to market alongside juggernauts like Kubernetes, Envoy, and etcd.\n\n## CNCF's 2021 predictions\n\nOne of the most exciting segments was Rice's five predictions for the technology industry at large - inside and outside of cloud native technologies. 
These five technologies to watch (or six depending on how you count them) span several emerging technology platforms and speak to the great diversity of needs and projects in the open source community. The TOC's five technology trends to watch include:\n\n1. Chaos engineering\n2. Kubernetes for the edge\n3. Service mesh\n4. Web assembly and eBPF\n5. The developer and operator experience\n\n{::options parse_block_html=\"false\" /}\n\n\u003Cdiv class=\"center\">\n\n\u003Cblockquote class=\"twitter-tweet\">\u003Cp lang=\"en\" dir=\"ltr\">Wdyt? What did we miss? \u003Ca href=\"https://t.co/ErA8jZ6lsS\">https://t.co/ErA8jZ6lsS\u003C/a>\u003C/p>&mdash; Liz Rice at KubeCon + CloudNativeCon 🇪🇺 (@lizrice) \u003Ca href=\"https://twitter.com/lizrice/status/1329867030284144640?ref_src=twsrc%5Etfw\">November 20, 2020\u003C/a>\u003C/blockquote> \u003Cscript async src=\"https://platform.twitter.com/widgets.js\" charset=\"utf-8\">\u003C/script>\n\n\u003C/div>\n\n## Chaos engineering\n\nThe systems and applications we build are getting more and more complex and the human ability to accurately reason about how each component will interact and react becomes harder or impossible. [Chaos engineering](https://en.wikipedia.org/wiki/Chaos_engineering), first proposed and famously [practiced by Netflix's engineering team](https://netflixtechblog.com/tagged/chaos-engineering), takes that change to heart and accepts that complex enough systems are genuinely unpredictable. Once you've understood this aspect of complex systems, the best way to test and reason about their reliability is to perform experiments that best represent real-life, unpredictable events.\n\nWhile the concept of \"turn off a component and see how the system as a whole reacts\" makes sense on the surface, implementing such a methodology, especially in a large enterprise organization, can be daunting. Many projects and more than a few companies have been created to deal with this problem. 
It will be interesting to see if chaos engineering can move from the \"elite\" technology performers into a more mainstream engineering organization of every size and maturity level.\n\nAt GitLab, we have many customers already experimenting with or practicing chaos engineering. [Uma Mukkara](https://in.linkedin.com/in/uma-mukkara) and [Karthik Satchitanand](https://in.linkedin.com/in/karthik-satchitanand) from Maya Data presented on Chaos Engineering using GitLab templates and LitmusChaos at GitLab Commit in Brooklyn in 2019. We're also considering the many ways that chaos engineering could be more [deeply integrated](https://gitlab.com/groups/gitlab-org/-/epics/381) into GitLab as part of a single [DevOps](/topics/devops/) platform. Watch the video from Uma and Karthik's GitLab Commit Brooklyn presentation below.\n\n\u003Cfigure class=\"video_container\">\n  \u003Ciframe src=\"https://www.youtube-nocookie.com/embed/ezhSg-t-PPM\" frameborder=\"0\" allowfullscreen=\"true\"> \u003C/iframe>\n\u003C/figure>\n\n## Kubernetes for the edge\n\nEdge computing refers to an area of cloud computing where the infrastructure for computing, storage, and other requirements needs to be placed in the field closer to users or their use cases. While cloud computing helps to centralize and create large data centers that benefit from scale, many if not most interactions with users occur far away from the data center and instead move to the edge.\n\nAs Kubernetes matures and transforms compute in the data center, more use cases for the core tenets of Kubernetes will emerge. And as those use cases expand in scope, we will continue to see new distributions or plugins to the Kubernetes ecosystem to support new use cases. 
Projects like [KubeEdge](https://kubeedge.io/en/), [K3s](https://k3s.io/), and others bring the Kubernetes API and extensibility to more devices, even those on the edge.\n\nWith the onslaught of data, devices, and demand for performance, edge computing has become an essential component of many organizations' overall network topology. Bringing the flexibility and power of Kubernetes compute and processing options to this problem will continue to expand in the coming year. For example, there may even be a Kubernetes cluster running [in your car](https://www.youtube.com/watch?v=zmuOxFp3CAk&feature=emb_title) today.\n\n## Service mesh\n\nRice predicts service mesh will be a hot topic in 2021, and with good reason. There has been an explosion of service mesh projects, discussions, and drama throughout the cloud native community in the past year, with more and more teams discussing how a service mesh can benefit their deployments.\n\nSimilar to chaos engineering, service mesh attempts to organize the growing complexity of systems into a clear and reasonable package. As teams move to a [microservices approach](/topics/microservices/) for application delivery, understanding the interaction and links between existing and new services becomes critical. Service mesh projects like [Istio](https://istio.io/), [Linkerd](https://linkerd.io/), and [Consul](https://www.consul.io/) have cropped up in the past few years. These tools help discover both known and new services and their connections. The goal of the projects is to create signal from noise, allowing humans to understand how those services interact and depend on one another.\n\nIn 2020, there was a lot of drama and discussion around the overall benefits and drawbacks of service mesh and the specific projects used to implement it. 
Now that there is a greater understanding among CNCF stakeholders about service mesh, we can expect the cloud native community to settle into a clear set of recommendations about when it is appropriate to implement a service mesh and how to make the right decisions about service mesh for your organization.\n\nThe most significant trend here will be with the ability of service mesh to not only discover services but secure them through policy enforcement. Additionally, the desire for observability will drive service meshes to become a critical cornerstone of observability in microservices environments.\n\n## Web assembly and eBPF\n\nIn this prediction, Rice rightly points out that the technologies of web assembly and eBPF are not - on the surface - related. [Web assembly](https://webassembly.org/), also called Wasm, is a new type of virtual machine brought to the browser. [eBPF](https://ebpf.io/) is a programmable interface for interacting with the Linux kernel. So why did the TOC and Rice decide to include these two different technologies in one prediction?\n\nWell, they share a common goal of sandboxing code when it runs. Sandboxing code, which means segmenting it from the parts of memory and the computer it doesn't need to get its job done, is a critical step toward allowing for secure code execution even of unknown sources. In the case of web assembly, that code is running in your browser. For eBPF, it could be running on a shared cloud-based Linux host. In both cases, these tools enable providers and security teams to effectively protect their code and data from prying eyes. This will remain a key objective for engineering teams for years to come, because we need to segment code better from a security perspective.\n\n### Securing code by segmenting processes\n\nMany of the most massive zero-day attacks we've seen in the past few years demonstrate that some traditional pieces of the stack that we \"take for granted\" should instead be prioritized. 
Today, the barriers of the application memory or even CPU space are still ripe for attack. So inventing new and more secure ways of segmenting processes from one another will be a trend to watch for in 2021 and beyond.\n\nAt GitLab we see security and protection as belonging to the same DevOps lifecycle as the rest of engineering. The [Secure](/stages-devops-lifecycle/secure/) and [Protect](/stages-devops-lifecycle/govern/) stages of the DevOps lifecycle will continue to impact the rest of the cycle and how engineering departments develop and release code faster and more securely. We will see continued consolidation throughout the industry to bring security and protection initiatives to the forefront of every developer's mind, enabling developers and security professionals alike to deploy with confidence.\n\n## The developer and operator experience\n\nSimilar to prioritizing function over UX, our own experience in developing, deploying, and maintaining our projects often takes a back seat to \"getting the job done.\" However, in much the same way, the developer experience and operator experience in their day-to-day tasks will be a key focus as technologies like Kubernetes enter a more mature phase.\n\nWe've already seen colossal consolidation and focus on the DevOps platform as a whole. It was just a year or two ago that we grudgingly accepted a disjointed set of poorly integrated tools, seeing it as unavoidable. Today, we see many DevOps companies and teams selling [enterprise tools](/enterprise/) that are focusing on improving the dev and ops experience by building more capability into our devices and bringing together a more [complete DevOps platform](/solutions/devops-platform/).\n\nThis is a mission that is obviously near and dear to our hearts at GitLab. Next year will bring a renewed focus on the dev and ops experience as more companies settle into the new normal of collaborating with teammates remotely, asynchronously, and automatically. 
This focus makes the DevOps platform we choose all the more critical to our engineering team's success, and as software defines the world we live in even more by the day, our organizations' overall success.\n\nDevelopers and operators will come to expect an integrated DevOps platform that allows for the dual goals of getting software built and shipped on day 0 and maintaining and operating that software on days 1, 2, and beyond.\n\n## What's next?\n\nA trend that is harder to quantify is the concept of [observability](/blog/software-developer-changing-role/) and growing trends toward more open communities. The concept of service mesh, Kubernetes at the edge, and the operator experience all play into observability, but I suspect we'll see more discussion of it in the coming year. Also the acceleration of [5G technology](/blog/how-tomorrows-tech-affects-sw-dev/) will impact all computing at the edge - Kubernetes or not. Beyond 2021, trends in [AI in software development](/blog/ai-in-software-development/) may accelerate changes to how we all interact. What trends do you think the CNCF missed in outlining things to watch in 2021? 
If you have a strong opinion, I'd love to hear about it on [Twitter](https://twitter.com/twitter).\n",[727,9,855],"security",{"slug":857,"featured":6,"template":688},"cncf-five-technologies-to-watch-in-2021","content:en-us:blog:cncf-five-technologies-to-watch-in-2021.yml","Cncf Five Technologies To Watch In 2021","en-us/blog/cncf-five-technologies-to-watch-in-2021.yml","en-us/blog/cncf-five-technologies-to-watch-in-2021",{"_path":863,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":864,"content":870,"config":880,"_id":882,"_type":13,"title":883,"_source":15,"_file":884,"_stem":885,"_extension":18},"/en-us/blog/configuring-your-cluster-with-kubernetes-integration",{"title":865,"description":866,"ogTitle":865,"ogDescription":866,"noIndex":6,"ogImage":867,"ogUrl":868,"ogSiteName":675,"ogType":676,"canonicalUrls":868,"schema":869},"Heroes journey: Working with GitLab's Kubernetes agent","A tutorial on deploying and monitoring an application in Kubernetes without leaving GitLab.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749682342/Blog/Hero%20Images/treasure.jpg","https://about.gitlab.com/blog/configuring-your-cluster-with-kubernetes-integration","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"GitLab Heroes Unmasked - How I became acquainted with the GitLab Agent for Kubernetes\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Jean-Philippe Baconnais\"}],\n        \"datePublished\": \"2022-06-08\",\n      }",{"title":871,"description":866,"authors":872,"heroImage":867,"date":874,"body":875,"category":876,"tags":877},"GitLab Heroes Unmasked - How I became acquainted with the GitLab Agent for Kubernetes",[873],"Jean-Philippe Baconnais","2022-06-08","_A key to GitLab’s success is our vast community of advocates. 
Here at\nGitLab, we call these active contributors \"[GitLab\nHeroes](/community/heroes/).\" Each hero contributes to GitLab in numerous\nways, including elevating releases, sharing best practices, speaking at\nevents, and more. Jean-Philippe Baconnais is an active GitLab Hero, who\nhails from France. We applaud his contributions, including leading community\nengagement events. Baconnais shares his interest in Kubernetes and explains\nhow to deploy and monitor an application in Kubernetes without leaving\nGitLab._ \n\n\nSince 2007, I’ve been a developer. I’ve learned a lot of things about\ncontinuous integration, deployment, infrastructure, and monitoring. In both\nmy professional and personal time, my favorite activity remains software\ndevelopment. After creating a new application with multiple components, I\nwanted to deploy it on Kubernetes, which has become hugely popular over the\nlast few years. This was a chance to experiment with the platform, and it\npromised to be a lot of fun. I knew some of the terms, and I had used\nKubernetes in production for five years, but only as a user: Kubernetes\nadministration is not my “cup of tea” 😅.\n\n\n## My first deployment in Kubernetes\n\n\nWhen I decided to deploy an application on Kubernetes, I wasn’t sure where\nto start until, while navigating in my project in GitLab, I saw a menu called\n“Kubernetes.” I wanted to know what GitLab was hiding behind this. Does this\nfeature link my project’s sources to a Kubernetes cluster? I used the credit\noffered by Google Cloud to discover and test this platform. \n\n\nDeploying my application on Kubernetes was easy. I wrote [a blog\npost](https://dev.to/jphi_baconnais/deploy-an-quarkus-application-on-gke-with-gitlabci-lgp)\nin 2019 describing how I did this, or rather, how GitLab helped me to create\nthis link so easily. In this blog post I will explain further and talk about\nwhat’s changed since then.\n\n\nBehind the “Kubernetes” menu, GitLab helps you integrate Kubernetes into\nyour project. 
You can create, from GitLab, a cluster on Google Cloud\nPlatform (GCP) or Amazon Web Services (AWS). If you already have a cluster\non these platforms or anywhere else, you can connect to it. You just need to\nspecify the cluster name, Kubernetes API URL, and certificate.\n\n\n![Connect\ncluster](https://about.gitlab.com/images/blogimages/baconcreatecluster.png){:\n.shadow}\n\n\nGitLab is a DevOps platform and in the list of DevOps actions, there is the\nmonitoring part. \n\n\n![Chart of GitLab\nstages](https://about.gitlab.com/images/blogimages/baconstreamline.png){:\n.shadow}\n\n\nGitLab deploys an instance of Prometheus to get information about your\ncluster and facilitate the monitoring of your application.\n\n\nFor example, you can see how many pods are deployed and their states in your\nenvironment. You can also view some charts and information about your\ncluster, like memory and CPU available. All these metrics are available by\ndefault without any changes to your cluster. We can also read\nthe logs directly in GitLab. For a developer, it’s great to have all this\ninformation on the same tool and this allows us to save time. \n\n\n![Pod\ndeployment](https://about.gitlab.com/images/blogimages/baconhealth.png){:\n.shadow}\n\n\n\n## A new way to integrate Kubernetes\n\n\nEverything I explained in the previous chapter doesn’t quite exist anymore.\nThe release of GitLab 14.5 was the beginning of a revolution. The Kubernetes\nintegration with certificates has limitations on security and many issues\nwere created. GitLab teams worked on a new way to rely on your cluster. And\nin Version 14.5, the [GitLab Agent for\nKubernetes](https://docs.gitlab.com/ee/user/clusters/agent/) was released! \n\n\n## GitLab Agent for Kubernetes\n\n\nGitLab Agent for Kubernetes is a new way to connect to your cluster. This\nsolution is easy to explain: An agent installed on your cluster communicates\nwith your GitLab instance with [gRPC](https://grpc.io/) protocol. 
Your agent\noffers you useful GitOps features I will explain later. The next picture\nshows you the GitLab Agent for Kubernetes architecture (from GitLab). \n\n\n![GitLab Agent for Kubernetes flow\nchart](https://about.gitlab.com/images/blogimages/baconkubernetesflowchart.png){:\n.shadow}\n\n\n### GitOps defined\n\n\nLet’s quickly define the term “[GitOps](/topics/gitops/)”: It’s a way to\nmanage your infrastructure as code, in a Git project. For me, there are two\naspects in GitOps: “pull” and “push” mode. \n\n\n- Push mode is when a change in your Git project triggers the upgrade of\nyour infrastructure. \n\n- Pull mode is when your infrastructure continuously checks your Git\nproject for changes and applies them automatically.\n\n\nAnd GitLab chose the latter mode for the GitLab Agent for\nKubernetes. Indeed, the agent running on your cluster will check\nfrequently if your project has changed. The gRPC protocol is well suited to\nthis design. When you push a modification to your project, agents detect it\nautomatically, and then your cluster upgrades.\n\n\n### How the GitLab Agent for Kubernetes works\n\n\nA few steps are needed to install a GitLab Agent for\nKubernetes and make it available in your project. \n\n\nFirst, if you create a new project on GitLab, you can use the template\n“Management cluster,” which allows the initialization of files. These files\nallow you to have examples of: \n\n- a declaration of an agent\n\n- a list of starter kits to install DevOps tools\n\n\nGitLab is a DevOps platform that wants to help you to configure all steps of\nthe lifecycle of your project. You can find the configuration of tools like\nPrometheus, Sentry, Ingress, etc. 
I will detail this later.\n\n\n### The evolution of GitLab Agent for Kubernetes\n\n\nBefore explaining more details about this agent, you have to know one thing.\nThis product is in constant evolution and your feedback is welcome in [this\nissue](https://gitlab.com/gitlab-org/gitlab/-/issues/342696#note_899701396)\nto improve it. The roadmap is available and each version gives some\ninformation about its evolution.\n\n\n## How to use GitLab Agent for Kubernetes\n\n\nCreating an agent is simple. You have to create a file in the directory\n.gitlab/agents/\u003Cnameofyouragent>/config.yaml. \n\n\n\n![Connect\ncluster](https://about.gitlab.com/images/blogimages/baconstructure.png)\n\n\n\nThe default configuration should contain:\n\n- your project id, represented by your \u003Cuser or group>/project\n\n- a namespace by default to deploy applications if it’s not present in your\nyaml files\n\n- path of your yaml file to apply. This can be a specific file, a directory,\nor a pattern of files\n\n- level of debug\n\n\n```\n\n\ngitops:\n manifest_projects:\n - id: xxxxx/demo-gitlab-kubernetes-cluster-management\n   default_namespace: gitlab-kubernetes-agent-demo\n   paths:\n   - glob: 'deploy.yaml'\nobservability:\n logging:\n   level: debug\n\n```\n\n\nYou can add security to this configuration file with the “ci_access”\nproperty. For example, it allows developers to avoid destroying the\nKubernetes infrastructure 😅. I didn’t explore in detail this part yet. \n\n\nAll configuration options are available on [this reference\npage](https://docs.gitlab.com/ee/user/clusters/agent/gitops.html#gitops-configuration-reference). \n\n\nAfter creating and pushing your file in your project, you have to register\nyour agent. And this action takes two seconds on the GitLab UI. \n\n\n![Add an\nagent](https://about.gitlab.com/images/blogimages/baconaddanagent.png){:\n.shadow}\n\n\nOn the next step, GitLab gives you the Docker command to install your agent\non your cluster. 
For example:\n\n\n```\n\n\ndocker run --pull=always --rm \\\n    registry.gitlab.com/gitlab-org/cluster-integration/gitlab-agent/cli:stable generate \\\n    --agent-token=\u003Cyour token generated by GitLab> \\\n    --kas-address=wss://kas.gitlab.com \\\n    --agent-version stable \\\n    --namespace gitlab-kubernetes-agent | kubectl apply -f -\n\n```\n\nYou can copy-paste this command on your cluster and your agent will be\navailable in a Kubernetes namespace. You can see on the GitLab UI that the\nlink with the agent is successful.\n\n\n![Link with agent\nsuccess](https://about.gitlab.com/images/blogimages/baconagentk.png){:\n.shadow}\n\n\n\nYou can also verify this connection in the logs of the agent container: \n\n\n```\n\n\n{\"level\":\"debug\",\"time\":\"2022-xx-xxT14:11:57.517Z\",\"msg\":\"Handled a\nconnection successfully\",\"mod_name\":\"reverse_tunnel\"}  \n\n\n```\n\n\n### GitLab cluster management \n\n\nGitLab is a DevOps platform and uses tiers of applications to manage all the\nsteps of a modern DevOps pipeline. The “Monitor” part in GitLab is based on\nsome tools such as\n[Prometheus](https://prometheus.io/docs/visualization/grafana/), [Sentry](https://sentry.io/),\n[Vault](https://www.vaultproject.io/), etc. To help you, GitLab created the\ntemplate [GitLab Cluster Management](https://gitlab.com/gitlab-org/project-templates/cluster-management), which\ngives you a basic configuration of these tools.\n\n\nTo install these tools, a `.gitlab-ci.yml` file is created and defines a job\nto deploy them with helmfile configuration. All these tools, contained in\nthe directory named “applications,” can be overridden or customized in the\n`values.yaml` file. \n \nAnd for my experimentation, I used this template and applied a small change\nto have an external IP address for the Prometheus instance. After\nregistering this external IP in GitLab (Menu Settings > Monitor > Alerts),\nthe Monitor menu has data. We can check information about any pods deployed\non my cluster. 
\n\n\n![Agent\ngraph](https://about.gitlab.com/images/blogimages/baconagentgraph.png){:\n.shadow}\n\n\n## The GitOps aspect \n\n\nThe GitOps aspect can be verified quickly. If you choose to specify one\nmanifest file defining an application deployment, a modification on this\nfile implies an automatic deployment on your cluster. Without CI! This\nallows us to have a faster deployment than if we passed with a pipeline. The\nnew features or fixes will be deployed faster on your infrastructures. And\nif you use the free version of GitLab, your deployment will not count in\nyour CI quota. \n\n\nAfter a commit, the agent detects it and we can see the commit id in the\nagent logs.\n\n\n```\n\n{\"level\":\"info\",\"time\":\"2022-04-11T15:22:44.049Z\",\"msg\":\"Synchronizing\nobjects\",\"mod_name\":\"gitops\",\"project_id\":\"jeanphi-baconnais/demo-gitlab-kubernetes-cluster-management\",\"agent_id\":12804,\"commit_id\":\"e2a82fe6cc82fa25e8d5a72584774f4751407558\"}\n\n\n```\n\n\n## CI/CD tunnel\n\n\nAnother feature that comes with the GitLab Agent for Kubernetes is the CI/CD\ntunnel. Your agent facilitates the interaction with your cluster. You just\nhave to define a KUBE_CONTEXT variable referencing the path of your agent. \n\n\n```\n\nvariables:\n\nKUBE_CONTEXT: \"xxxxx/demo-gitlab-kubernetes-cluster-management:agentk\"\n\n\n```\n\n\nAnd actions on your cluster are available without secret configuration or\nanything else. 
If you want to execute `kubectl` commands, you can easily use\nthis job:\n\n\n```\n\n\ntest-cicd-tunnel:\n  stage: test\n  extends: [.kube-context]\n  image:\n    name: bitnami/kubectl:latest\n    entrypoint: [\"\"]\n  script:\n    - kubectl get ns\n  when: manual\n\n```\n\n\n## What's next\n\n\nCurrently, the GitLab Agent for Kubernetes doesn’t allow you to get information\nabout the state of pods on your cluster’s environment page.\n\n\n![Success](https://about.gitlab.com/images/blogimages/baconci.png){:\n.shadow}\n\n\nBut GitLab wants to offer the same level of service as the certificate\nintegration. So, check the roadmap ([in this\nissue](https://gitlab.com/groups/gitlab-org/-/epics/3329)) and the contents\nof each release. The Cluster Management template is a work in progress, too:\nopen issues will bring new configuration features for these tools.\n\n\nThis experience was very rewarding for me. I had to deploy a project on Google\nCloud, and I discovered a new method. I saw this agent described in [GitLab\n14.5](/releases/2021/11/22/gitlab-14-5-released/) but I didn’t imagine the\nimpact it could have on a project.\n\n\nMy colleague [Eric Briand](https://twitter.com/eric_briand) and I had the\nopportunity to speak about this subject at [Malt Academy\nsessions](https://www.malt-academy.com/) and [Meetup GitLab\nFrance](https://www.meetup.com/GitLab-Meetup-France/events/283917115). I\nwill continue to experiment with this agent and try different options for\nthis wonderful product!\n\n\n**This blog post and linked pages contain information related to upcoming\nproducts, features, and functionality. It is important to note that the\ninformation presented is for informational purposes only. Please do not rely\non this information for purchasing or planning purposes. As with all\nprojects, the items mentioned in this video/blog post and linked pages are\nsubject to change or delay. 
The development, release, and timing of any\nproducts, features, or functionality remain at the sole discretion of GitLab\nInc.**\n\n\nCover image by [Ashin K Suresh](https://unsplash.com/photos/mkxTOAxqTTo) on\nUnsplash.\n\n{: .note}\n","open-source",[9,268,707,878,879],"growth","contributors",{"slug":881,"featured":6,"template":688},"configuring-your-cluster-with-kubernetes-integration","content:en-us:blog:configuring-your-cluster-with-kubernetes-integration.yml","Configuring Your Cluster With Kubernetes Integration","en-us/blog/configuring-your-cluster-with-kubernetes-integration.yml","en-us/blog/configuring-your-cluster-with-kubernetes-integration",{"_path":887,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":888,"content":894,"config":903,"_id":905,"_type":13,"title":906,"_source":15,"_file":907,"_stem":908,"_extension":18},"/en-us/blog/container-network-security-is-important",{"title":889,"description":890,"ogTitle":889,"ogDescription":890,"noIndex":6,"ogImage":891,"ogUrl":892,"ogSiteName":675,"ogType":676,"canonicalUrls":892,"schema":893},"How to secure your Kubernetes pods using GitLab Container Network Security","We help you get started with securing your Kubernetes cluster using Cilium, a GitLab-managed application.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749681687/Blog/Hero%20Images/diane-helentjaris-TYk0YQbog9g-unsplash.jpg","https://about.gitlab.com/blog/container-network-security-is-important","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"How to secure your Kubernetes pods using GitLab Container Network Security\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Fernando Diaz\"}],\n        \"datePublished\": \"2020-10-23\",\n      }",{"title":889,"description":890,"authors":895,"heroImage":891,"date":897,"body":898,"category":855,"tags":899},[896],"Fernando Diaz","2020-10-23","{::options parse_block_html=\"true\" /}\n\n\nKubernetes 
does not come secure out of the box. There is a lot of\nconfiguration needed to achieve a secure cluster. One important security\nconfiguration to consider is how pods communicate with each other. This is\nwhere Network Policies come into play, making sure that your pods are not\nexchanging data with unknown or malicious sources, which can compromise\nyour cluster.\n\n\n[Network\nPolicies](https://kubernetes.io/docs/concepts/services-networking/network-policies/)\nare rules on how pods can communicate with other pods as well as endpoints.\nThey are pretty much a firewall for your internal cluster network.\n\n\nGitLab provides Container Network Security using\n[Cilium](https://cilium.io/) as a [GitLab-managed\napplication](https://docs.gitlab.com/ee/user/clusters/applications.html#install-cilium-using-gitlab-cicd).\nCilium is a CNI [network\nplugin](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/)\nfor Kubernetes that can be used to implement support for Network Policies.\n\n\nThe video below provides an introduction on how to easily implement Network\nPolicies from GitLab, as well as a demo on testing Network Policies:\n\n\n\u003C!-- blank line -->\n\n\u003Cfigure class=\"video_container\">\n  \u003Ciframe src=\"https://www.youtube-nocookie.com/embed/45Q__T42ZMA\" frameborder=\"0\" allowfullscreen=\"true\"> \u003C/iframe>\n\u003C/figure>\n\n\u003C!-- blank line -->\n\n\n## Network Policies in action\n\n\nThere are many different ways of configuring Network Policies within your\nKubernetes cluster. You can configure\nthe `ingress from` as well as the `egress to` traffic. 
There are four kinds of selectors\nwhich can be used to configure traffic between pods:\n\n\n- podSelector: Selects the specified pods in the same namespace\n\n- namespaceSelector: Selects all pods in the given namespace\n\n- podSelector & namespaceSelector: Selects the specified pods in the given namespace\n\n- ipBlock: Selects the external [IP\nCIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) ranges\nprovided\n\n\nMore information on the behavior of \"to\" and \"from\" selectors can be found\nin the [Kubernetes\ndocumentation](https://kubernetes.io/docs/concepts/services-networking/network-policies/#behavior-of-to-and-from-selectors).\n\n\nBelow is an example of a Network Policy that only allows ingress traffic\nto the pod with label `app: \"notes\"` from pods with label `access: \"true\"`.\n\n\n```yaml\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n  name: access-notes\nspec:\n  podSelector:\n    matchLabels:\n      app: \"notes\"\n  ingress:\n  - from:\n    - podSelector:\n        matchLabels:\n          access: \"true\"\n```\n\n\n## Installing Cilium as a GitLab-managed application\n\n\nCilium is provided by GitLab as a managed application, meaning\nthat GitLab installs and upgrades Cilium for you. There is no need\nto worry about how to get Cilium up and running. Cilium as well as your\nNetwork Policies can be configured as needed.\n\n\nIn order to install and configure Cilium as a GitLab-managed application,\nyou can follow the steps provided in\nthe [GitLab cluster applications\ndocumentation](https://docs.gitlab.com/ee/user/clusters/applications.html#install-cilium-using-gitlab-cicd).\nThe sample project [Simply Simple\nNotes](https://gitlab.com/gitlab-examples/security/simply-simple-notes) is\nconfigured to use Cilium. 
It will install Cilium on the Kubernetes cluster\nassociated with the project.\n\n\n[This\nguide](https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy/)\ncan be used to test your Network Policies once Cilium has been installed.\n\n\n## Threat monitoring dashboard\n\n\nBy default, Cilium installs with Hubble, a monitoring daemon that collects\npacket flow metrics per namespace. These metrics are sent to the GitLab\n[Threat Monitoring\ndashboard](https://docs.gitlab.com/ee/user/application_security/threat_monitoring/).\n\n\n![threat monitoring packet\nmetrics](https://about.gitlab.com/images/blogimages/container-network-security/packet-metrics.png)\n\nPacket Metrics displayed in the Threat Management dashboard\n\n{: .note.text-center}\n\n\nThe packet flow metrics collected are:\n\n- The total number of inbound and outbound packets for the given time period\n\n- The proportion of packets dropped according to the configured policies\n\n- The average rate per second of forwarded and dropped packets for the\nrequested time interval\n\n\nWithin the Threat Monitoring dashboard, you can also view and configure the\nNetwork Policies in your project. This makes it easy to navigate\nyour container network configuration in one interface.\n\n\n![threat monitoring Network\nPolicies](https://about.gitlab.com/images/blogimages/container-network-security/network-policy.png)\n\nNetwork Policies displayed in the Threat Management dashboard\n\n{: .note.text-center}\n\n\nNetwork Policies can also be created and edited through an intuitive UI. You\ncan just select the network rules you wish to use and the YAML will be\nautomatically created and applied to your cluster. 
This eliminates the need\nto edit the complicated YAML structure for Network Policies directly, and\nallows you to make sure the correct rules are being applied without\nconfusion.\n\n\nNetwork Rules can be created using the following rule types:\n\n- Labels\n\n- Entities\n\n- IP/CIDR\n\n- DNS\n\n- Level 4\n\n\n![threat monitoring policy\ncreation](https://about.gitlab.com/images/blogimages/container-network-security/policy-creation.png)\n\nPolicy being created in the Threat Management dashboard\n\n{: .note.text-center}\n\n\n## Learn more about GitLab Security\n\n\nI hope this blog can help get you started with Network Policies in\nKubernetes. Check out some other\n\nways GitLab can help with security.\n\n\n- [How application security engineers can use GitLab to secure their\nprojects](/blog/secure-stage-for-appsec/)\n\n- [How to capitalize on GitLab Security tools with external\nCI](https://docs.gitlab.com/ee/integration/jenkins.html)\n\n- [What you need to know about Kubernetes\nRBAC](/blog/understanding-kubernestes-rbac/)\n\n\nCover image by [Diane Helentjaris](https://unsplash.com/@dhelentjaris) on\n[Unsplash](https://unsplash.com/photos/TYk0YQbog9g)\n\n{: .note}\n",[855,900,9,901,902],"careers","agile","testing",{"slug":904,"featured":6,"template":688},"container-network-security-is-important","content:en-us:blog:container-network-security-is-important.yml","Container Network Security Is Important","en-us/blog/container-network-security-is-important.yml","en-us/blog/container-network-security-is-important",{"_path":910,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":911,"content":917,"config":924,"_id":926,"_type":13,"title":927,"_source":15,"_file":928,"_stem":929,"_extension":18},"/en-us/blog/container-security-in-gitlab",{"title":912,"description":913,"ogTitle":912,"ogDescription":913,"noIndex":6,"ogImage":914,"ogUrl":915,"ogSiteName":675,"ogType":676,"canonicalUrls":915,"schema":916},"Get better container security with GitLab: 4 real-world 
examples","Containers are increasingly popular – and increasingly vulnerable. Using\nfour threat scenarios, we step through how GitLab's built-in security\nfeatures will make containers safer.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749667094/Blog/Hero%20Images/container-security.jpg","https://about.gitlab.com/blog/container-security-in-gitlab","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"Get better container security with GitLab: 4 real-world examples\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Thiago Figueiró\"}],\n        \"datePublished\": \"2020-07-28\",\n      }",{"title":912,"description":913,"authors":918,"heroImage":914,"date":920,"body":921,"category":855,"tags":922},[919],"Thiago Figueiró","2020-07-28","The number of web applications hosted in containers grows every day, but\ndata from our 2020 Global DevSecOps Survey showed a majority of companies\ndon't have a [container\nsecurity](/topics/devsecops/beginners-guide-to-container-security/) strategy\nin place. This post shows examples of how GitLab can help increase the\nsecurity of such applications and their hosting environment. We focus on web\napplications, but most of the security features described in this post apply\nto any containerized apps.\n\n\nDetailed descriptions and examples of the tactics and techniques mentioned\nin this post can be found in the [MITRE ATT&CK\nMatrix](https://attack.mitre.org/).\n\n\n## Threat Models\n\n\nTo help with our scenarios, we're taking two tactics from the MITRE ATT&CK\nmatrix: [Initial Access](https://attack.mitre.org/tactics/TA0001/) and\n[Execution](https://attack.mitre.org/tactics/TA0002/). 
There are similar\ncategories in other frameworks, such as the [cyber kill\nchain](https://en.wikipedia.org/wiki/Kill_chain#The_cyber_kill_chain).\n\n\n### Initial Access\n\n\nIn this phase, an attacker is attempting to establish access to your\ncomputing resources through different techniques. A single one might be\nsufficient for the attack to succeed but, quite often, a successful\ncompromise relies on a few different methods.\n\n\nThe diagram below shows three examples of how an attacker can gain access to\na container hosting an application accessible from the Internet.\n\n\n```mermaid\n\ngraph LR\n  classDef default fill:#FFFFFF,stroke:#0C7CBA;\n  classDef baddie fill:#ffd6cc,stroke:#991f00;\n\n  subgraph Kubernetes Cluster\n    subgraph Container\n      subgraph Application\n        Accounts[Valid\u003Cbr>Accounts]\n        click Accounts \"https://attack.mitre.org/techniques/T1078\"\n        style Accounts fill:#FFFFFF,stroke:#0C7CBA;\n\n        Dependencies[External\u003Cbr>Dependencies]\n        click Dependencies \"https://attack.mitre.org/techniques/T1195\"\n        style Dependencies fill:#FFFFFF,stroke:#0C7CBA;\n\n        Service[Network\u003Cbr>Service]\n        click Service \"https://attack.mitre.org/techniques/T1190\"\n        style Service fill:#FFFFFF,stroke:#0C7CBA;\n      end\n    style Application fill:#fff,stroke:#cccccc;\n  end\n  style Container fill:#f0f0f5,stroke:#cccccc;\n  end\n\n  Attacker -- Supply chain attack --> Dependencies\n  Attacker -- Exploit --> Service\n  Attacker -- Exposed Credentials --> Accounts\n\n  class Attacker baddie\n\n```\n\n\nThere are different ways threat vectors can be exploited but, to demonstrate\nGitLab's features, let's pick some specific examples of how it can happen.\nNone of these are made-up by the way; they have all happened - and continue\nto happen - in the wild.\n\n\n1. **Exposed Credentials**. 
Someone with legitimate access to your systems\nsaved valid account credentials in an application's code repository.\n\n1. **Supply Chain Attack**. There's no apparent vulnerability in the\napplication itself but the attacker managed to introduce one in an external\ndependency utilized by the application, so now it, too, is vulnerable.\n\n1. **Exploit**. The application is vulnerable to command execution because\nit doesn't validate user input properly.\n\n\n### Execution\n\n\nAt this point, the attacker has:\n\n\n1. Acquired credentials that allow access to most areas of the web\napplication.\n\n1. Discovered that the application is vulnerable to remote code execution.\n\n1. Introduced a different vulnerability to the application via an external\ndependency.\n\n\nThe next objective is to use one or more of these assets to execute\ninstructions of their choice on the target systems. The diagram below shows\ndifferent ways this can be accomplished.\n\n\n```mermaid\n\ngraph LR\n  classDef default fill:#FFFFFF,stroke:#0C7CBA;\n  classDef cl-container fill:#f0f0f5,stroke:#cccccc;\n  classDef baddie fill:#ffd6cc,stroke:#991f00;\n\n  subgraph Infrastructure\n    subgraph Container\n      Application\n      Others\n      Exploit[Executable Exploit]\n      Shell[Reverse Shell]\n\n      Application -- Deliver, Execute --> Exploit\n      Application -- Execute --> Shell\n      Others[Other\u003Cbr>Techniques] -- Deliver, Execute --> Exploit\n      Exploit -- Modify --> Filesystem\n      Exploit -- Spawn --> Shell\n    end\n\n    subgraph Containers\n      Internal(Internal Service)\n    end\n    Exploit -- Lateral Movement --> Internal\n    class Container,Containers cl-container\n  end\n\n  Shell -- Internet --> Attacker\n\n  class Attacker,Exploit,Others,Shell baddie\n\n```\n\n\nAgain we're choosing scenarios that fit our examples.\n\n\n1. **Deliver**, **Execute**. The attacker has an exploit that they would\nlike to deliver and execute.\n   1. 
The vulnerable application is tricked into writing arbitrary content to the container file system.\n   1. The vulnerable application is tricked into executing arbitrary commands.\n   1. The external dependency provides another, unspecified way to deliver and execute malicious code.\n1. **Spawn**. Execution of malicious code spawns a [reverse\nshell](https://en.wikipedia.org/wiki/Shell_shoveling) that connects to the\nattacker and waits for commands.\n\n1. **Modify**. The malicious code modifies configurations on the container's\nfile system that further exposes the container to attack, or perhaps,\nescalates the attacker's privileges.\n\n1. **Lateral Movement**. The attacker's exploit probes other hosts in the\ncontainer's network, managing to find and access an internal service that\nwasn't exposed to the Internet in the first place.\n\n\n## How GitLab Helps Stop These Attacks\n\n\nAs part of the [Secure](https://about.gitlab.com/direction/secure/) and\n[Protect](https://about.gitlab.com/direction/govern/) Stages, GitLab\ndelivered and continues to improve features that minimize your security risk\nand help you [shift security\nleft](/blog/efficient-devsecops-nine-tips-shift-left/).\n\n\nLet's see how these GitLab features would prevent and detect the attacks\ndescribed in our example scenarios.\n\n\n### Initial Access\n\n\nBy [shifting left](/blog/toolchain-security-with-gitlab/), all techniques in\nthis phase could be detected even before the application was deployed to an\nInternet-accessible environment.\n\n\nThis is done by taking advantage of [GitLab\nSecure](https://docs.gitlab.com/ee/user/application_security/) features as\npart of an application's [Continuous Integration\n(CI)](https://docs.gitlab.com/ee/ci/) builds.\n\n\n#### Exposed Credentials\n\n\nA [Secret\nDetection](https://docs.gitlab.com/ee/user/application_security/secret_detection/)\nscan reports several types of secrets accidentally or intentionally\ncommitted to your code repository, allowing 
the merge request author to\nremove and invalidate the exposed secret before it can be used in an attack.\n\n\n\u003Cfigure class=\"video_container\">\n  \u003Ciframe src=\"https://www.youtube.com/embed/W2tjcQreDwQ\" frameborder=\"0\" allowfullscreen=\"true\"> \u003C/iframe>\n\u003C/figure>\n\n\n#### Supply Chain Attack\n\n\nOne type of supply chain attack is against the open-source code libraries\nused by your application. [Dependency\nScanning](https://docs.gitlab.com/ee/user/application_security/dependency_scanning/)\nreports known vulnerabilities in dependencies used by your application.\nScanners for multiple languages are available and kept up-to-date with a\ndatabase of known vulnerabilities so that potential vulnerabilities are\nidentified and reported as part of your CI builds.\n\n\n\u003Cfigure class=\"video_container\">\n  \u003Ciframe src=\"https://www.youtube.com/embed/uGhS2Wh6PBE\" frameborder=\"0\" allowfullscreen=\"true\"> \u003C/iframe>\n\u003C/figure>\n\n\n#### Exploit\n\n\nFor the examples given in this category, there are two ways GitLab mitigates\nand prevents the described attacks. The first is [Dynamic Application\nSecurity Testing\n(DAST)](https://docs.gitlab.com/ee/user/application_security/dast/), another\nscanner that can be run as a CI job. The second way is through the GitLab\nWeb Application Firewall (WAF), part of our [Protect\nStage](/handbook/engineering/development/sec/govern/).\n\n\nBecause DAST executes against a running deployment of your application, it\ndetects potential problems that can't be discovered by merely analyzing an\napplication's source code. 
In our example, the attacker relies on an input\nvalidation weakness in the application that might be identified and reported\nas a [server side code\ninjection](https://www.zaproxy.org/docs/alerts/90019/) by DAST.\n\n\n\u003Cfigure class=\"video_container\">\n  \u003Ciframe src=\"https://www.youtube.com/embed/wxcEiuUasyM\" frameborder=\"0\" allowfullscreen=\"true\"> \u003C/iframe>\n\u003C/figure>\n\n\nEffective security is implemented in layers and, should DAST fail to\nidentify a vulnerability, we can sometimes rely on WAF to block malicious\nrequests to the application.\n\n\nA WAF can monitor and block web traffic based on a set of pre-configured\nrules that determine if a request is potentially malicious or a response\nindicates compromised security. GitLab's WAF comes with the [OWASP\nModSecurity Core Rule\nSet](https://owasp.org/www-project-modsecurity-core-rule-set/) installed by\ndefault, which will successfully prevent various forms of [shell\ninjection](https://github.com/coreruleset/coreruleset/blob/7776fe23f127fd2315bad0e400bdceb2cabb97dc/rules/REQUEST-932-APPLICATION-ATTACK-RCE.conf#L415)\nand [SQL\ninjection](https://github.com/coreruleset/coreruleset/blob/v3.4/dev/rules/REQUEST-942-APPLICATION-ATTACK-SQLI.conf)\nattacks.\n\n\n\u003Cfigure class=\"video_container\">\n  \u003Ciframe src=\"https://www.youtube.com/embed/03n4C60YnDQ\" frameborder=\"0\" allowfullscreen=\"true\"> \u003C/iframe>\n\u003C/figure>\n\n\n### Execution\n\n\nIn case the previous counter-measures have failed to prevent initial access\nto our system, we have another layer of defense against attacks. Even after\na vulnerable application is deployed to a publicly accessible environment,\nwe can still detect and prevent cyberattacks.\n\n\n#### Detection\n\n\nIn our examples, the attacker modified the container filesystem and created\nnew processes by executing malicious code. These actions can be detected and\nlogged, as shown in the demonstration video below. 
Additionally, the logs\ncan be sent to a SIEM with GitLab's [SIEM\nintegration](https://docs.gitlab.com/ee/update/removals.html), enabling a\nsecurity operations team to be notified of the suspicious activity within\nseconds of it happening.\n\n\nAs part of our [Container Host\nSecurity](https://about.gitlab.com/direction/govern/) features, you can\n[enable logging of system\ncalls](https://docs.gitlab.com/ee/update/removals.html) on any containers in\nyour [Kubernetes\ncluster](https://docs.gitlab.com/ee/user/project/clusters/).\n\n\n\u003Cfigure class=\"video_container\">\n  \u003Ciframe src=\"https://www.youtube.com/embed/WxBzBz76FxU\" frameborder=\"0\" allowfullscreen=\"true\"> \u003C/iframe>\n\u003C/figure>\n\n\n#### Prevention\n\n\nGitLab is able to prevent all attack examples described earlier: Lateral\nMovement, Reverse Shell, filesystem modification, and malicious code\nexecution attacks.\n\n\nBy deploying a [Network\nPolicy](https://docs.gitlab.com/ee/topics/autodevops/stages.html#network-policy)\nto your Kubernetes cluster, the compromised container would not be allowed\nto create an outbound connection to the attacker through the Internet.\nSimilarly, the Executable Exploit would be prevented from probing other pods\nin a cluster network due to policy restrictions.\n\n\n\u003Cfigure class=\"video_container\">\n  \u003Ciframe src=\"https://www.youtube.com/embed/pgUEdhdhoUI\" frameborder=\"0\" allowfullscreen=\"true\"> \u003C/iframe>\n\u003C/figure>\n\n\nTo prevent filesystem modification and restrict code execution, [Pod\nSecurity\nPolicies](https://kubernetes.io/docs/concepts/policy/pod-security-policy/)\n[are supported](https://docs.gitlab.com/ee/update/removals.html) as part of\nour Container Host Security features.\n\n\n\u003Cfigure class=\"video_container\">\n  \u003Ciframe src=\"https://www.youtube.com/embed/fPy53c3rbAs\" frameborder=\"0\" allowfullscreen=\"true\"> \u003C/iframe>\n\u003C/figure>\n\n\n## Conclusion\n\n\nThe number of container-based 
applications will continue to grow along with\nthe necessity to secure them, and our new [Container Host\nSecurity](/direction/govern/) category is part of the GitLab strategy to\nenable organizations to proactively protect their cloud-native environments.\n\n\nIn this blog post, we highlighted only a few of the DevSecOps features\ncurrently available in GitLab. For additional existing and upcoming\nfunctionality, please visit the product direction pages for\n[Protect](/direction/govern/) and [Secure](/direction/secure/).\n\n\nCover image by [JJ Ying](https://unsplash.com/@jjying) on\n[Unsplash](https://unsplash.com).\n\n{: .note}",[9,855,923],"demo",{"slug":925,"featured":6,"template":688},"container-security-in-gitlab","content:en-us:blog:container-security-in-gitlab.yml","Container Security In Gitlab","en-us/blog/container-security-in-gitlab.yml","en-us/blog/container-security-in-gitlab",{"_path":931,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":932,"content":938,"config":944,"_id":946,"_type":13,"title":947,"_source":15,"_file":948,"_stem":949,"_extension":18},"/en-us/blog/containers-kubernetes-basics",{"title":933,"description":934,"ogTitle":933,"ogDescription":934,"noIndex":6,"ogImage":935,"ogUrl":936,"ogSiteName":675,"ogType":676,"canonicalUrls":936,"schema":937},"Kubernetes & containers, and where cloud native fits in – the basics","Brush up on your understanding of these concepts key to modern development.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749671296/Blog/Hero%20Images/containers-kubernetes-basics.jpg","https://about.gitlab.com/blog/containers-kubernetes-basics","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"Kubernetes & containers, and where cloud native fits in – the basics\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Rebecca Dodd\"}],\n        \"datePublished\": \"2017-11-30\",\n      
}",{"title":933,"description":934,"authors":939,"heroImage":935,"date":941,"body":942,"category":683,"tags":943},[940],"Rebecca Dodd","2017-11-30","\n\nWe throw around terms like Kubernetes, containers, and cloud native with some abandon, but sometimes take it for granted that everyone knows what's what. So here we go...\n\n\u003C!-- more -->\n\n## Container explainer\n\nA container is a method of operating-system-level virtualization that allows\nyou to securely run an application and its dependencies independently without\nimpacting other containers or the operating system.\n\nBefore containers, it was common to use virtual machines (VMs) to provide a safe, sandboxed environment in which to run software within a computer. A container works much like a virtual machine except that, instead of packaging\nyour code with a full operating system, it runs as an isolated Linux process\nthat shares the host's kernel. This means that each container only contains the code and dependencies needed to run that specific application, making containers smaller and faster to run.\n\n![Containers vs virtual machines vs bare metal](https://about.gitlab.com/images/blogimages/containers-vm-bare-metal.png){: .medium.center}\n\n*\u003Csmall>Containers retain the same repeatability factor as virtual machines, but are much faster and use fewer resources to run.\u003C/small>*\n\n## Kuber... what?\n\nKubernetes is primarily a container scheduler – an open source platform designed to automate the management of application containers, from deploying and scaling to operating.\n\nWhile virtualization technology statically partitions your servers into smaller VMs, Kubernetes allows you to partition as you go, depending on how many or how few resources are needed at the time, scaling up and down as necessary. You can respond quickly and efficiently to customer demand while limiting hardware usage and minimizing disruption to feature rollouts. 
With container schedulers, the focus shifts from the machine to the service – the machine becomes an ephemeral, disposable element.\n\nWhat's more, using containers in this way means they are decoupled from the host filesystem and underlying infrastructure, making them portable across clouds and operating systems.\n\n## Containers + Kubernetes \u003Ci class=\"fas fa-arrow-right\" aria-hidden=\"true\">\u003C/i> cloud native\n\nWhich brings us to [cloud native development](/topics/cloud-native/). Cloud native applications embrace a new approach to building and running applications that takes full advantage of the cloud computing model and container schedulers such as Kubernetes.\n\nNot to be confused with running traditional applications in the cloud, cloud native means that applications are purpose-built for the cloud, and consist of loosely coupled services. Applications are re-architected for running in the cloud – shifting the focus away from the machine to the service instead. Cloud native acknowledges that the cloud is about more than just who manages your servers – it is the next step in digital transformation.\n\nBy building applications that can run on any cloud, right out of the box, you’re free to migrate and distribute across vendors in line with your budget and business priorities. You also free up developer time – they don’t have to write code to run and scale across a range of cloud infrastructures, so they can focus on improvements and new features.\n\nSound good? We think so! 
Visit [about.gitlab.com/kubernetes](/solutions/kubernetes/) to learn more about how GitLab and Kubernetes can get you to cloud native nirvana.\n\n[Cover image](https://unsplash.com/@guibolduc?photo=uBe2mknURG4) by [Guillaume Bolduc](https://unsplash.com/@guibolduc) on Unsplash\n{: .note}\n",[9,727],{"slug":945,"featured":6,"template":688},"containers-kubernetes-basics","content:en-us:blog:containers-kubernetes-basics.yml","Containers Kubernetes Basics","en-us/blog/containers-kubernetes-basics.yml","en-us/blog/containers-kubernetes-basics",{"_path":951,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":952,"content":958,"config":963,"_id":965,"_type":13,"title":966,"_source":15,"_file":967,"_stem":968,"_extension":18},"/en-us/blog/delta-cloud-native",{"title":953,"description":954,"ogTitle":953,"ogDescription":954,"noIndex":6,"ogImage":955,"ogUrl":956,"ogSiteName":675,"ogType":676,"canonicalUrls":956,"schema":957},"How Delta made the journey to cloud native","Delta tossed aside the rule book to go cloud native and achieve workflow portability. Here's how it's done.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749678376/Blog/Hero%20Images/deltacommit.jpg","https://about.gitlab.com/blog/delta-cloud-native","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"How Delta made the journey to cloud native\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Valerie Silverthorne\"}],\n        \"datePublished\": \"2019-10-17\",\n      }",{"title":953,"description":954,"authors":959,"heroImage":955,"date":960,"body":961,"category":876,"tags":962},[680],"2019-10-17","\n_Delta Air Lines is the top domestic carrier in the United States, flying over 200 million people a year to more than 300 destinations in 50 countries. Delta is in a highly competitive industry with a lot of moving parts and that’s why, in 2016, the company began a sweeping digital transformation journey. 
At [GitLab Commit in Brooklyn](/blog/wrapping-up-commit/), Jasmine James, IT manager, DevOps Center of Excellence at Delta, shared how the company journeyed to [cloud native](/topics/cloud-native/) while avoiding vendor lock-in._\n\nDelta’s primary goal was business agility, Jasmine says, and the plan was to get there using cloud native. “We'll do cloud native and then we'll get the business agility, we thought,” she says. “But at Delta, because we have such large, complex systems and a very mission-critical environment, it was not that easy at all.”\n\nTo start, Delta took a hard look at its existing environment and at ways it could be improved. Metrics-based process mapping made it clear the infrastructure was standing in the way of delivering value. A flexible architecture would also make it easier to have scalable and reliable workloads, she explains. The company’s existing tools wouldn’t work with cloud native, so Jasmine’s team set out to find tools that could provide version control, [continuous integration, and continuous delivery](/solutions/continuous-integration/) – the three areas the team considered the [MVP](https://www.techopedia.com/definition/27809/minimum-viable-product-mvp) to get the job done.\n\n## Stick with vowels\n\nThe team came up with an easy-to-remember acronym to describe the criteria used during the tool search: **AEIOU**. **A** is for applicability: Will the tool be applicable for the heavy Java and Linux users at Delta? **E** meant enterprise-ready because Delta needed tried and true maturity. **I** stands for integration, and Jasmine was quick to point out that in this case, it wasn’t about legacy integration but simply a matter of ensuring all the new tools worked well together. **O** is for overhead, which has particular meaning for Jasmine’s team since they manage all the development tools at Delta. “We had to ask ourselves how easy it would be to manage and administer tools for 5000 developers at Delta,” she says. 
And finally, **U** represents usefulness, which is another way of saying the team wanted to ensure it would choose the right building blocks that would work together.\n\nDelta’s first choice of tools was GitLab, followed by [Sonatype Nexus](https://www.sonatype.com/product-nexus-repository) and Jenkins for CI, Jasmine says. Today Delta is considering expanding its options for developers to also include [GitLab CI](/solutions/continuous-integration/).\n\n## Careful choices = concrete benefits\n\nThe careful thought process has already shown a number of concrete benefits, Jasmine says. Delta created an API to allow customers flying different legs using partner airlines to check in just one time. And the airline’s employees have enhanced decision support around weather events that help to minimize the impact of canceled flights.\n\nBut the benefits go further, Jasmine stresses. “We now have the ability to play the field,” she says. “We not only can leverage the best of breed features in the public cloud space, we also can pick and choose based on public cloud provider performance and cost. With the cost savings we have been able to do a lot (which means we can) fund more great features.”\n\nDelta’s also been able to offer what Jasmine calls a “first class developer experience” because programmers can leverage both the airline’s on-premises [OpenShift](https://www.openshift.com) private cloud and scale to the public cloud as needed, all while using familiar programming languages and tools.\n\nJasmine’s takeaway: “Be you, be different, be great in cloud native. 
What that means is that although I’ve talked a lot about Delta’s journey, there is no one way to implement cloud native.”\n\nWatch all of Jasmine’s presentation:\n\n\u003Ciframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/zV_hFcxoN8I\" frameborder=\"0\" allow=\"accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen>\u003C/iframe>\n\nCover image by [Angela Compagnone](https://unsplash.com/@angelacompagnone) on [Unsplash](https://unsplash.com/).\n{: .note}\n",[727,9,707,727],{"slug":964,"featured":6,"template":688},"delta-cloud-native","content:en-us:blog:delta-cloud-native.yml","Delta Cloud Native","en-us/blog/delta-cloud-native.yml","en-us/blog/delta-cloud-native",{"_path":970,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":971,"content":977,"config":985,"_id":987,"_type":13,"title":988,"_source":15,"_file":989,"_stem":990,"_extension":18},"/en-us/blog/deploying-application-eks",{"title":972,"description":973,"ogTitle":972,"ogDescription":973,"noIndex":6,"ogImage":974,"ogUrl":975,"ogSiteName":675,"ogType":676,"canonicalUrls":975,"schema":976},"Deploying apps to GitLab-managed Amazon EKS with Auto DevOps","A Kubernetes tutorial: Use GitLab AutoDevOps to deploy your applications to Amazon EKS.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749666959/Blog/Hero%20Images/gitlab-aws-cover.png","https://about.gitlab.com/blog/deploying-application-eks","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"How to deploy your application to a GitLab-managed Amazon EKS cluster with Auto DevOps\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Abubakar Siddiq Ango\"}],\n        \"datePublished\": \"2020-05-05\",\n      }",{"title":978,"description":973,"authors":979,"heroImage":974,"date":981,"body":982,"category":683,"tags":983},"How to deploy your application to a GitLab-managed Amazon EKS cluster with Auto 
DevOps",[980],"Abubakar Siddiq Ango","2020-05-05","\n\nDeploying an application onto Amazon EKS doesn't have to be painful. In fact, GitLab's [Auto DevOps](https://docs.gitlab.com/ee/topics/autodevops/) function makes it easy for developers to deploy applications from GitLab onto any cloud. In this tutorial, I break down how to deploy a simple Ruby Hello World application onto our GitLab-managed Amazon EKS cluster, which we created earlier ([read part one to learn how](/blog/gitlab-eks-integration-how-to/)). For the tutorial, I integrated GitLab with Amazon EKS at the group level, in a GitLab group I created purposely for this, so all the projects created in the group can use the integration without any extra configuration. \n\nIn the previous blog post, we saw how seamless it is to create a Kubernetes cluster on Amazon EKS in GitLab with the right permissions. Developer productivity is greatly improved because there is no more need to manually set up clusters, and the same cluster can be used for multiple projects when Amazon EKS is integrated with GitLab at the group and instance levels, making onboarding new projects a breeze.\n\n## A few things to note about Auto DevOps\n\nAuto DevOps provides a pre-defined [CI/CD configuration](/topics/ci-cd/) which allows you to automatically detect, build, test, deploy, and monitor your applications. 
All you need to do is push your code and GitLab does the rest, saving you a lot of effort to set up the workflow and processes required to build, deploy, and monitor your project.\n\nYou'll need to execute the following steps for GitLab Auto DevOps to work seamlessly:\n\n* A [base domain](https://docs.gitlab.com/ee/user/project/clusters/#base-domain) name needs to be provided on GitLab’s integration page for Amazon EKS.\n\n ![AutoDevOps Base Domain](https://about.gitlab.com/images/blogimages/deploying-application-eks/base-domain.png){: .shadow.medium.center}\n Setting the base domain for Auto DevOps\n{: .note.text-center}\n\n* GitLab creates subdomains for every project that is deployed, using the project slug, the project ID, and the base domain name. For example, the link `https://abubakar-te-demos-minimal-ruby-app-2.eksdemo-project.gitlabtechevangelism.net/` is automatically created, where `abubakar-te-demos-minimal-ruby-app` is the project slug and `2` is the project ID, both prepended to the base domain name, `eksdemo-project.gitlabtechevangelism.net`.\n\n* Create a wildcard A-record for the base domain and point it to the Ingress endpoint created during the integration in the public hosted zone of your domain name on Route 53. Selecting the ALIAS option in Route 53 will present a list of resources you have already created. You will see your Ingress endpoint in the list of elastic load balancers. 
Alternatively, you can copy and paste from GitLab’s integration page.\n\n ![Route53 Alias for base Domain](https://about.gitlab.com/images/blogimages/deploying-application-eks/route53.png){: .shadow.small.center}\n Set up the alias for the base domain using the generated Ingress endpoint.\n{: .note.text-center}\n\n* Install the pre-defined Kubernetes certificate management controller, cert-manager, on the GitLab-EKS integration to ensure every URL created for your application has a Let’s Encrypt certificate.\n\n## Now, let's deploy our application\n\n### How to set up the project\n\nIt takes five simple steps to set up the project for your application.\n\nFirst, create a GitLab project from an existing sample, in this case, GitLab’s Auto DevOps example called Minimal Ruby App. There is nothing special about this application; it's just a Ruby application you can use to try out the integration. If you integrated Amazon EKS at the group level on GitLab, you can just go ahead and create the project in the group. At the project level, you will have to perform the integration after creating the project.\n\nNext, copy the URL from the “Clone with HTTPS” field of the sample project, Minimal Ruby App:\n\n  ![Cloning over HTTPS](https://about.gitlab.com/images/blogimages/deploying-application-eks/https-clone.png){: .shadow.small.center}\n  Cloning the sample project.\n{: .note.text-center}\n\nThird, click the \"import project\" tab on the new project page, then click on the \"repo by URL\" button. 
Paste the URL you copied earlier in the text box for \"Git repository URL\" and click on \"create project\".\n\n  ![Importing Project](https://about.gitlab.com/images/blogimages/deploying-application-eks/import-project.png){: .shadow.medium.center}\n  The progress of the sample project import.\n  {: .note.text-center}\n\nNext, the project will be imported and all the files from the sample will be available in your new project.\n\n  ![Project import progress](https://about.gitlab.com/images/blogimages/deploying-application-eks/import-progress.png){: .shadow.medium.center}\n  The project import is completed.\n  {: .note.text-center}\n\nFinally, go to project settings > CI/CD > Auto DevOps and enable “Default to Auto DevOps pipeline”.\n\n  ![Project Settings](https://about.gitlab.com/images/blogimages/deploying-application-eks/project-settings.png){: .shadow.medium.center}\n  Enable the Auto DevOps pipeline.\n  {: .note.text-center}\n\n### How to deploy your application\n\n* Now a pipeline is created and the project is built, tested, and deployed to production using the [default Auto DevOps CI files](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/ci/templates/Auto-DevOps.gitlab-ci.yml).\n\n  ![Project Pipeline](https://about.gitlab.com/images/blogimages/deploying-application-eks/pipeline.png)\n  The first Auto DevOps pipeline.\n  {: .note.text-center}\n\n* Look inside the pipeline output to see the \"deployment to production\" line. 
This is where you'll find the URL to access your application.\n\n  ![Deployment to production](https://about.gitlab.com/images/blogimages/deploying-application-eks/production-deploy.png)\n  Next, link to the deployed application.\n  {: .note.text-center}\n\n* In the image above, you can see the application has been deployed and can be accessed at `https://abubakar-te-demos-minimal-ruby-app-1.eksdemo-project.gitlabtechevangelism.net/`\n\nAnd it should show a “Hello World” message:\n\n  ![Deployed Application](https://about.gitlab.com/images/blogimages/deploying-application-eks/hello-world.png){: .shadow.medium.center}\n  The deployed application with \"Hello World\" message.\n  {: .note.text-center}\n\n## How to make changes to the deployed application\n\nIf any new changes are pushed, a different set of jobs is run to build, test, and review the changes before they can be merged to the master branch. I changed the \"Hello World\" text in the previous deployment to HTML text in a new Git branch called `amazon-eks-html` using the GitLab Web IDE, and committed the changes.\n\n  ![Make changes to application](https://about.gitlab.com/images/blogimages/deploying-application-eks/new-commit.png)\n  Making new changes to the application.\n  {: .note.text-center}\n\nWhile committing the changes, I selected \"start a new merge request (MR),\" which took me to the MR page where I added more information about the changes in a new MR.\n\n  ![New Merge request](https://about.gitlab.com/images/blogimages/deploying-application-eks/new-mr.png)\n  The MR to deploy the new application.\n  {: .note.text-center}\n\nIn the image above, you can see a pipeline is created to build, test, and deploy using [Review Apps](https://docs.gitlab.com/ee/ci/review_apps/) to allow you to review the changes before deploying to production.\n\n  ![New MR pipeline test](https://about.gitlab.com/images/blogimages/deploying-application-eks/new-mr-test.png)\n  MR with Review Apps\n  {: .note.text-center}\n\nOnce the 
review is finished, the application is deployed to a dedicated namespace in the Amazon EKS cluster for you to review before deploying to production. A URL for the [Review App](https://docs.gitlab.com/ee/ci/review_apps/) is provided, as shown in the image below.\n\n  ![Review Applications](https://about.gitlab.com/images/blogimages/deploying-application-eks/review-apps.png){: .shadow.medium.center}\n  The application in the Review App.\n  {: .note.text-center}\n\nThe `stop_review` job cleans up the Review App once the review is done. If MR approvals are required, the MR must be approved before being merged into the master branch. Once merged to master, the project is built, tested, and deployed to production.\n\n  ![Merged Change MR](https://about.gitlab.com/images/blogimages/deploying-application-eks/merged-mr.png){: .shadow.medium.center}\n  Deploying changes to production.\n  {: .note.text-center}\n\nThe image above shows that a second pipeline ran after the MR was merged. Once completed, a button is provided to `view app` and also see memory consumption as the app runs. The `view app` button will open the application on the project's subdomain.\n\n  ![Updated application](https://about.gitlab.com/images/blogimages/deploying-application-eks/updated-site.png)\n  Changes deployed to production.\n  {: .note.text-center}\n\n## Deploy to Amazon EKS with Auto DevOps\n\nThe Auto DevOps function at GitLab makes deploying an application to the Amazon EKS cluster quite simple. Really, all you need to do is push code, and Auto DevOps automatically detects the programming language and uses the necessary [buildpack](https://buildpacks.io/) to test, build, and deploy your application. 
GitLab takes making changes to your application a step further with Review Apps, which deploy your app to a temporary environment so you can review the changes before deploying to production.\n\nIf you have questions about how to integrate GitLab with Amazon EKS to create a Kubernetes cluster, revisit the [first blog post](/blog/gitlab-eks-integration-how-to/).\n",[9,984,923,748],"features",{"slug":986,"featured":6,"template":688},"deploying-application-eks","content:en-us:blog:deploying-application-eks.yml","Deploying Application Eks","en-us/blog/deploying-application-eks.yml","en-us/blog/deploying-application-eks",{"_path":992,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":993,"content":999,"config":1006,"_id":1008,"_type":13,"title":1009,"_source":15,"_file":1010,"_stem":1011,"_extension":18},"/en-us/blog/deprecating-the-cert-based-kubernetes-integration",{"title":994,"description":995,"ogTitle":994,"ogDescription":995,"noIndex":6,"ogImage":996,"ogUrl":997,"ogSiteName":675,"ogType":676,"canonicalUrls":997,"schema":998},"Deprecating cert-based Kubernetes integration in GitLab 14.5","Understand why we're deprecating this integration, how it might affect you, and get a closer look at GitLab Agent for Kubernetes.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749670635/Blog/Hero%20Images/kubernetesterms.jpg","https://about.gitlab.com/blog/deprecating-the-cert-based-kubernetes-integration","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"We are deprecating the certificate-based integration with Kubernetes in GitLab 14.5\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Viktor Nagy\"}],\n        \"datePublished\": \"2021-11-15\",\n      }",{"title":1000,"description":995,"authors":1001,"heroImage":996,"date":1002,"body":1003,"category":1004,"tags":1005},"We are deprecating the certificate-based integration with Kubernetes in GitLab 
14.5",[765],"2021-11-15","\n\nWe are deprecating the certificate-based Kubernetes integration with GitLab and all the features that\nrely on it. This is the legacy integration, [introduced](/releases/2018/01/22/gitlab-10-4-released/#gitlab-clusters-now-generally-available) early in 2018, in GitLab 10.4.\n\nIn September 2020, we started to build a more robust, secure, forward-looking, and reliable integration\nwith Kubernetes and released the [GitLab Agent for Kubernetes](https://docs.gitlab.com/ee/user/clusters/agent/),\nwhich is the recommended way to connect clusters with GitLab.\n\nIn this post, we explain the reasons for the change of path, what to expect, and how this\naffects the features that rely on the certificate-based integration with Kubernetes.\n\n## What to expect\n\nThe deprecation of the certificate-based Kubernetes integration affects all the features\nthat require a cluster connected to GitLab through cluster certificates. All those features are deprecated. The certificate-based integrations will be switched off on gitlab.com starting with the GitLab 15.0 release. Self-managed users will be able to switch the features back on until their final removal. [The final removal will happen](https://gitlab.com/gitlab-org/configure/general/-/issues/199) once all the collected, critical use cases are supported with the agent and enough time has been given for our users to migrate to the agent.\n\nRegarding the existing features that rely on the certificate-based integration:\n\n- Some of the features will be migrated to use the GitLab Agent and we will\nprovide you with migration guides to help you follow along. We will communicate them\nthrough the following releases in our release posts, as usual.\n- If you already use features that depend on cluster certificates, you can keep using\nthem. But note that you might need to take extra steps in the future to migrate them\nto the Agent. 
However, we **do not** guarantee that we will migrate all the existing\ncertificate-based features to the Agent.\n- Existing users should not expect new functionality except for the developments required to support more recent Kubernetes versions, security and critical fixes, and community contributions. \n- If you currently do not use a deprecated feature but decide to use it anyway,\nunderstand that there's a risk of having to migrate it to the Agent later, or, in the\nworst-case scenario, you might have to stop using the feature in the future.\n\nSee the updated list of the [affected features](https://docs.gitlab.com/ee/user/infrastructure/clusters/#deprecated-features) in the docs.\n\n## What this deprecation means\n\nThe deprecation means that we will not build more features on top of the existing features\nthat depend on cluster certificates. It doesn't mean that the features will stop working right now.\n\nNew features for Kubernetes clusters will be built on top of the connection between GitLab and\nyour cluster through the Agent rather than on top of the certificate-based connection.\n\nWe have [dedicated documentation](https://docs.gitlab.com/ee/user/infrastructure/clusters/migrate_to_gitlab_agent.html) to support you in migrating from the certificate-based connections to agent-based connections.\n\n## What to do for clusters not yet connected to GitLab\n\nTo connect new clusters with GitLab, use the [Agent](https://docs.gitlab.com/ee/user/clusters/agent/)\nso that you don't have to take extra steps to adopt the Agent later on.\n\n## Why we deprecated the certificate-based integration with Kubernetes\n\nThere were several reasons why we decided to rethink our approach to Kubernetes:\n\n- The certificate-based integration's biggest shortcoming is that it relies on direct\naccess to the Kubernetes API. 
Its exposure often comes with unacceptably high risk, especially for GitLab\nSaaS users.\n- The most valuable features within the integration required elevated privileges, often\nrequiring you to give cluster-admin rights to GitLab. At the same time, features that did\nnot need these privileges could not be restricted with more limited access. This means\nthat you had to grant full access for a rather simple feature, which could turn out to be a liability.\n- Feedback from users implied that many of the features were never ready for production and\ncould be used only in limited situations.\n- The industry progressed, and pull-based deployments started to gain ground, an approach\nthat was mostly unknown when we built the integration.\n\nWe decided to address all these shortcomings with the GitLab Agent.\n\n## The advantages of the GitLab Agent\n\nThe integration with Kubernetes through the Agent provides many benefits compared to the\ncertificate-based integration, such as:\n\n- Security\n- Reliability\n- Scalability\n- Speed\n- Functionality\n\nCompared to the certificate-based integration, the Agent offers the following functionalities:\n\n- Configure your cluster through code. 
This enables a clear separation of duties and you can use well-known merge request workflows and approvals.\n- An agent can be configured using regular Kubernetes RBAC rules, keeping access\nto your cluster safe.\n- Scaling to multiple environments is trivial as each agent connects to one environment.\n- An agent's connection to a cluster can be shared by other groups and projects to simplify\ncoordination and maintenance.\n- The Agent supports pull-based deployments, enabling modern GitOps approaches.\n- The Agent supports push-based deployments, enabling existing GitLab CI/CD workflows to\nremain functional.\n- Having a bi-directional channel between GitLab and the cluster enables a new set of integrations,\nlike surfacing container network security policy alerts and container scan results into GitLab.\n\n## What is next on the GitLab Agent roadmap\n\nWe identified a few high-value features on the list of deprecated features. Moreover, we know\nthat having some level of observability around the resources managed by the Agent is\nits biggest shortcoming. As a result, we are going to focus on the following three items first:\n\n- Provide [observability features for cluster resources](https://gitlab.com/groups/gitlab-org/-/epics/2493) so you can track your metrics and logs directly from GitLab.\n- [Auto DevOps and especially Auto Deploy](https://docs.gitlab.com/ee/topics/autodevops/stages.html#auto-deploy) can already be used on top of an Agent-based connection, but the setup is not easy. We will provide you with a solution soon.\n- [GitLab-Managed Clusters](https://docs.gitlab.com/ee/user/project/clusters/gitlab_managed_clusters.html#gitlab-managed-clusters-deprecated) are expected to work as they do today until we ship equivalent or superior functionality\nbuilt around the Agent. Together with shipping this functionality, we will provide a migration guide if necessary.\n\n## We are listening\n\nPlease help us to help you. 
We need your feedback to help us prioritize the migration of the\ncurrent features to the Agent and to build new features based on the Agent. We are especially seeking\nfeedback around real-world, high-scale usage of the features built for using Kubernetes clusters with GitLab.\n\nIf you would be open to sharing your feedback, please start a new thread in [this epic](https://gitlab.com/groups/gitlab-org/configure/-/epics/8). Feel free to mention `@nagyv-gitlab` in your comment to make sure that your comment is read and the information won't be missed.\n","news",[9,232,813],{"slug":1007,"featured":6,"template":688},"deprecating-the-cert-based-kubernetes-integration","content:en-us:blog:deprecating-the-cert-based-kubernetes-integration.yml","Deprecating The Cert Based Kubernetes Integration","en-us/blog/deprecating-the-cert-based-kubernetes-integration.yml","en-us/blog/deprecating-the-cert-based-kubernetes-integration",{"_path":1013,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":1014,"content":1020,"config":1027,"_id":1029,"_type":13,"title":1030,"_source":15,"_file":1031,"_stem":1032,"_extension":18},"/en-us/blog/docker-hub-rate-limit-monitoring",{"title":1015,"description":1016,"ogTitle":1015,"ogDescription":1016,"noIndex":6,"ogImage":1017,"ogUrl":1018,"ogSiteName":675,"ogType":676,"canonicalUrls":1018,"schema":1019},"How to make Docker Hub rate limit monitoring a breeze","Docker Hub Rate Limits are enforced and we need to find ways to monitor the remaining pull requests. 
Explore some ways to create a monitoring plugin for Nagios/Icinga/Sensu/Zabbix and test-drive a new Prometheus exporter in combination with Grafana.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749681749/Blog/Hero%20Images/vidarnm-unsplash.jpg","https://about.gitlab.com/blog/docker-hub-rate-limit-monitoring","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"How to make Docker Hub rate limit monitoring a breeze\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Michael Friedrich\"}],\n        \"datePublished\": \"2020-11-18\",\n      }",{"title":1015,"description":1016,"authors":1021,"heroImage":1017,"date":1023,"body":1024,"category":683,"tags":1025},[1022],"Michael Friedrich","2020-11-18","\n\nWhen we learned about the [Docker Hub Rate Limit](/blog/mitigating-the-impact-of-docker-hub-pull-requests-limits/), we thought about ways to mitigate and analyse the new situation. Container images are widely used and adopted for sandbox environments in [CI/CD pipelines](/solutions/continuous-integration/) and cloud-native production environments with app deployment in [Kubernetes clusters](/solutions/kubernetes/).\n\n## What is meant by Docker Hub limits?\n\nEach `docker pull` request toward the central `hub.docker.com` container registry is counted. When a defined limit is reached, future requests are blocked and might be delayed into the next free window. [CI/CD](/topics/ci-cd/) jobs cannot be executed anymore after receiving an HTTP error `429 - too many requests`, and similar errors will be seen in production deployment logs for Kubernetes.\n\nDocker sets this limit at 100 anonymous requests per six hours for the client's source IP address. If you have multiple container deployments behind an IP address, for example a company DMZ using a NAT, this limit can be reached very fast. 
A similar problem happens with watchtower tools that try to keep your container images updated, for example on your self-managed GitLab Runner. The limit can be raised by logging in and by getting a paid subscription.\n\nThe question is: Where can you see the current limit and the remaining pull requests?\n\n### How to check the Docker Hub request limit?\n\nThe [Docker documentation](https://docs.docker.com/docker-hub/download-rate-limit/#how-can-i-check-my-current-rate) suggests using CLI commands that invoke `curl` HTTP requests against the Docker Hub registry and parse the JSON response with [jq](https://stedolan.github.io/jq/).\n\nDefine the `IMAGE` variable once for the following CLI commands to use:\n\n```shell\n$ IMAGE=\"ratelimitpreview/test\"\n```\n\nObtain a token for authorization. Optionally print the variable value to verify its content.\n\n```shell\n$ TOKEN=$(curl \"https://auth.docker.io/token?service=registry.docker.io&scope=repository:$IMAGE:pull\" | jq -r .token)\n\n$ echo $TOKEN\n```\n\nThe next step is to simulate a `docker pull` request. Instead of using `GET` as the HTTP request method, a `HEAD` request is sent which does not count toward the rate limit. The response headers contain the keys `RateLimit-Limit` and `RateLimit-Remaining`.\n\n```shell\n$ curl --head -H \"Authorization: Bearer $TOKEN\" https://registry-1.docker.io/v2/$IMAGE/manifests/latest\n```\n\nThe limit in the example is `2500` with `2495` pull requests remaining. `21600` defines the limit time window as six hours.\n\n```\nRateLimit-Limit: 2500;w=21600\nRateLimit-Remaining: 2495;w=21600\n```\n\n`RateLimit-Reset` can be returned too; it indicates the remaining time until the limits are reset.\n\n### Create a monitoring script\n\nThe CLI commands can be ported to a programming language of your choice that provides methods for HTTP requests and better response parsing. The algorithm needs to follow these steps:\n\n* Obtain an authorization token from Docker Hub. 
Username/password credentials can optionally be provided; otherwise, the request happens anonymously.\n* Send a `HEAD` request to the Docker Hub registry to simulate a `docker pull` request\n* Parse the response headers and extract the values for `RateLimit-Limit` and `RateLimit-Remaining`\n* Print a summary of the received values\n\nA plugin script which can be used by Nagios/Icinga/Sensu/Zabbix and others has additional requirements. It needs to implement the [Monitoring Plugins API specification](https://www.monitoring-plugins.org/doc/guidelines.html):\n\n* Print the limit and remaining count\n* Calculate a state: Ok, Warning, Critical, Unknown and print a helpful text on the shell\n* Add optional warning/critical thresholds for the remaining count. Whenever the count is lower than the threshold, the state changes to Warning/Critical and the exit code changes: `OK=0, Warning=1, Critical=2, Unknown=3`\n* Collect limit values as performance metrics for graphing and visualization\n* Add verbose mode and timeout parameters as plugin development best practices. If Docker Hub does not respond within the default 10 seconds, the plugin exits and returns `Unknown` as state.\n\nYou can download the [check_docker_hub_limit.py plugin script](https://gitlab.com/gitlab-com/marketing/corporate_marketing/developer-evangelism/code/check-docker-hub-limit) and integrate it into your monitoring environment.\n\n#### Use the monitoring plugin script\n\nThe [check_docker_hub_limit.py](https://gitlab.com/gitlab-com/marketing/corporate_marketing/developer-evangelism/code/check-docker-hub-limit) plugin is written in Python 3 and requires the `requests` library. 
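
The token-and-`HEAD`-request algorithm described above can be reduced to a short Python sketch. This is a minimal illustration only, assuming the public `ratelimitpreview/test` image and the `requests` library; it is not the plugin's actual code:

```python
# Rough sketch of the rate limit check algorithm -- not the actual plugin code.
import requests

IMAGE = 'ratelimitpreview/test'

# Step 1: obtain an (anonymous) authorization token from Docker Hub
token = requests.get(
    'https://auth.docker.io/token',
    params={'service': 'registry.docker.io', 'scope': 'repository:%s:pull' % IMAGE},
    timeout=10,
).json()['token']

# Step 2: simulate a pull with a HEAD request (does not count against the limit)
resp = requests.head(
    'https://registry-1.docker.io/v2/%s/manifests/latest' % IMAGE,
    headers={'Authorization': 'Bearer ' + token},
    timeout=10,
)

# Step 3: parse header values like '100;w=21600' into the leading integer
def parse(value):
    return int(value.split(';')[0]) if value else None

# Step 4: print a summary of the limit and the remaining pulls
print('limit:', parse(resp.headers.get('RateLimit-Limit')),
      'remaining:', parse(resp.headers.get('RateLimit-Remaining')))
```

The actual plugin adds thresholds, exit codes, and performance data on top of this skeleton.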
Follow the [installation instructions](https://gitlab.com/gitlab-com/marketing/corporate_marketing/developer-evangelism/code/check-docker-hub-limit#installation) and run the plugin script with the `--help` parameter to see all available options:\n\n```\n$ python check_docker_hub_limit.py --help\n\nusage: check_docker_hub_limit.py [-h] [-w WARNING] [-c CRITICAL] [-v] [-t TIMEOUT]\n\nVersion: 2.0.0\n\noptional arguments:\n  -h, --help            show this help message and exit\n  -w WARNING, --warning WARNING\n                        warning threshold for remaining\n  -c CRITICAL, --critical CRITICAL\n                        critical threshold for remaining\n  -v, --verbose         increase output verbosity\n  -t TIMEOUT, --timeout TIMEOUT\n                        Timeout in seconds (default 10s)\n```\n\nRun the script to fetch the current remaining count. The plugin script exit code returns `0`, meaning OK.\n\n```\n$ python3 check_docker_hub_limit.py\nOK - Docker Hub: Limit is 5000 remaining 4997|'limit'=5000 'remaining'=4997\n\n$ echo $?\n0\n```\n\nSpecify a warning threshold of `10000` pulls, and a critical threshold of `3000`.\nThe example shows how the state changes to `WARNING` with a current count of `4999` remaining\npull requests. The plugin script exit code changes to `1`.\n\n```\n$ python3 check_docker_hub_limit.py -w 10000 -c 3000\nWARNING - Docker Hub: Limit is 5000 remaining 4999|'limit'=5000 'remaining'=4999\n\n$ echo $?\n1\n```\n\nSpecify a higher critical threshold of `5000`. 
When the remaining count goes below this value,\nthe plugin script returns `CRITICAL` and changes the exit state to `2`.\n\n```\n$ python3 check_docker_hub_limit.py -w 10000 -c 5000\nCRITICAL - Docker Hub: Limit is 5000 remaining 4998|'limit'=5000 'remaining'=4998\n\n$ echo $?\n2\n```\n\nWhen a timeout is reached, or another error is thrown, the exit state switches to `3` and the output state becomes `UNKNOWN`.\n\n### Use a Prometheus exporter for rate limit metrics\n\n[Prometheus](https://prometheus.io/) scrapes metrics from HTTP endpoints. There is a variety of exporters for Prometheus to monitor host systems, HTTP endpoints, containers, databases, etc. Prometheus provides [client libraries](https://prometheus.io/docs/instrumenting/clientlibs/) to make it easier to start writing your own custom exporter. The metrics need to be exported in a [defined format](https://prometheus.io/docs/instrumenting/exposition_formats/).\n\nThe Docker Hub limit values can be fetched by obtaining an authorization token first and then sending a `HEAD` request, as shown above. The code follows the ideas of the monitoring plugin. Instead of printing the values onto the shell, the metric values are exposed with an HTTP server. The Prometheus client libraries provide this functionality built in.\n\nWe have created a [Prometheus Exporter for Docker Hub Rate Limits](https://gitlab.com/gitlab-com/marketing/corporate_marketing/developer-evangelism/code/docker-hub-limit-exporter) using the [Python client library](https://github.com/prometheus/client_python). The repository provides a demo environment with `docker-compose` which starts the exporter, Prometheus, and Grafana.\n\nEnsure that [docker-compose is installed](https://docs.docker.com/compose/install/) and clone/download the repository. 
Then run the following commands:\n\n```\n$ cd example/docker-compose\n\n$ docker-compose up -d\n```\n\nNavigate to `http://localhost:3030` to access Grafana and explore the demo environment with the pre-built dashboard.\n\n![Grafana dashboard for Docker Hub Limit Prometheus Exporter](https://about.gitlab.com/images/blogimages/docker-hub-limit-monitoring/grafana_prometheus_docker_hub_limit_exporter_demo.png){: .shadow.medium.center}\n\nGrafana dashboard for Docker Hub Limits\n{: .note.text-center}\n\n### More monitoring/observability ideas\n\nUse the steps explained in this blog post to add Docker Hub limit monitoring. Evaluate the Prometheus exporter or the check plugin, or create your own monitoring scripts. Fork the repositories and send an MR our way!\n\n* [check-docker-hub-limit for Nagios/Icinga/Zabbix/Sensu](https://gitlab.com/gitlab-com/marketing/corporate_marketing/developer-evangelism/code/check-docker-hub-limit)\n* [docker-hub-limit-exporter for Prometheus](https://gitlab.com/gitlab-com/marketing/corporate_marketing/developer-evangelism/code/docker-hub-limit-exporter)\n\nThe Prometheus exporter and the monitoring plugin script can help you see trends and calculate usage over time. Use your own local (GitLab) container registry or one of the available caching methods described in these blog posts:\n\n* [Cache Docker images in your CI/CD infrastructure](/blog/mitigating-the-impact-of-docker-hub-pull-requests-limits/). Use this resource for possible solutions around caching and proxying.\n* [Use the Dependency Proxy](/blog/minor-breaking-change-dependency-proxy/). Learn more about the GitLab Dependency Proxy being made open source in the future.\n* [#everyonecancontribute cafe: Docker Hub Rate Limit: Mitigation, Caching and Monitoring](https://everyonecancontribute.com/post/2020-11-04-cafe-7-docker-hub-rate-limit-monitoring/). This is a community meetup hosted by Developer Evangelists at GitLab. 
The blog post includes a video with more insights and discussion.\n\nPhoto by [Vidar Nordli-Mathisen](https://unsplash.com/@vidarnm) from [Unsplash](https://www.unsplash.com).\n{: .note}\n",[727,685,9,835,1026],"production",{"slug":1028,"featured":6,"template":688},"docker-hub-rate-limit-monitoring","content:en-us:blog:docker-hub-rate-limit-monitoring.yml","Docker Hub Rate Limit Monitoring","en-us/blog/docker-hub-rate-limit-monitoring.yml","en-us/blog/docker-hub-rate-limit-monitoring",{"_path":1034,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":1035,"content":1041,"config":1046,"_id":1048,"_type":13,"title":1049,"_source":15,"_file":1050,"_stem":1051,"_extension":18},"/en-us/blog/fantastic-infrastructure-as-code-security-attacks-and-how-to-find-them",{"title":1036,"description":1037,"ogTitle":1036,"ogDescription":1037,"noIndex":6,"ogImage":1038,"ogUrl":1039,"ogSiteName":675,"ogType":676,"canonicalUrls":1039,"schema":1040},"Fantastic Infrastructure as Code security attacks and how to find them","Learn about possible attack scenarios in Infrastructure as Code and GitOps environments, evaluate tools and scanners with Terraform, Kubernetes, etc., and more.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749667482/Blog/Hero%20Images/cover-image-unsplash.jpg","https://about.gitlab.com/blog/fantastic-infrastructure-as-code-security-attacks-and-how-to-find-them","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"Fantastic Infrastructure as Code security attacks and how to find them\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Michael Friedrich\"}],\n        \"datePublished\": \"2022-02-17\",\n      }",{"title":1036,"description":1037,"authors":1042,"heroImage":1038,"date":1043,"body":1044,"category":790,"tags":1045},[1022],"2022-02-17","[Infrastructure as Code](/topics/gitops/infrastructure-as-code/)(IaC) has\neaten the world. 
It helps manage and provision computer resources\nautomatically and avoids manual work or UI form workflows. Lifecycle\nmanagement with IaC started with declarative and idempotent configuration,\npackage, and tool installation. In the era of cloud providers, IaC tools\nadditionally help abstract cloud provisioning. They can create defined\nresources automatically (network, storage, databases, etc.) and apply the\nconfiguration (DNS entries, firewall rules, etc.).\n\n\nLike everything else, it has its flaws. IaC workflows have shifted left in\nthe development lifecycle, making it more efficient. Developers and DevOps\nengineers need to learn new tools and best practices. Mistakes may result in\nleaked credentials or supply chain attacks. Existing security assessment\ntools might not be able to detect these new vulnerabilities.\n\n\nIn this post, we will dive into these specific risks and focus on IaC\nmanagement tools such as Terraform, cloud providers, and deployment\nplatforms involving containers and Kubernetes.\n\n\nFor each scenario, we will look into threats, tools, integrations, and best\npractices to reduce risk.\n\n\nYou can read the blog post top-down or navigate into the chapters\nindividually.\n\n\n- [Scan your own infrastructure - know what's\nimportant](#scan-your-infrastructure---know-what-is-important)\n    - [Thinking like an attacker](#thinking-like-an-attacker)\n- [Tools to detect Terraform\nvulnerabilities](#tools-to-detect-terraform-vulnerabilities)\n\n- [Develop more IaC scenarios](#develop-more-iac-scenarios)\n    - [Terraform Module Dependency Scans](#terraform-module-dependency-scans)\n    - [IaC Security Scanning for Containers](#iac-security-scanning-for-containers)\n    - [IaC Security Scanning with Kubernetes](#iac-security-scanning-with-kubernetes)\n- [Integrations into CI/CD and Merge Requests for\nReview](#integrations-into-cicd-and-merge-requests-for-review)\n    - [Reports in MRs as comment](#reports-in-mrs-as-comment)\n    - [MR 
Comments using GitLab IaC SAST reports as source](#mr-comments-using-gitlab-iac-sast-reports-as-source)\n- [What is the best integration\nstrategy?](#what-is-the-best-integration-strategy)\n\n\n## Scan your infrastructure - know what is important\n\n\nStart with identifying the project/group responsible for managing the IaC\ntasks. An inventory search for specific IaC tools, file suffixes (Terraform\nuses `.tf`, for example), and languages can be helpful. The security scan\ntools discussed in this blog post will discover all supported types\nautomatically. Once you have identified the projects, you can use one of the\ntools to run a scan and review the detected vulnerabilities.\n\n\nThere might not be any scan results because your infrastructure is secure at\nthis time. Still, your processes may require you to create documentation,\nrunbooks, and action items for any vulnerabilities discovered in the\nfuture. Creating a forecast on possible scenarios to defend is hard, so let\nus change roles from the defender to the attacker for a moment. Which\nsecurity vulnerabilities are out there for a malicious attacker to exploit?\nMaybe it is possible to create vulnerable scenarios and simulate the\nattacker role by running a security scan.\n\n\n### Thinking like an attacker\n\n\nSome potential vulnerabilities are easy to spot, like plaintext passwords\nin the configuration. Other scenarios involve cases you would never think of,\nor a chain of items causing a security issue.\n\n\nLet us create a scenario for an attacker by provisioning an S3 bucket in AWS\nwith Terraform. 
We intend to store logs, database dumps, or credential\nvaults in this S3 bucket.\n\n\nThe following example creates the `aws_s3_bucket` resource in Terraform\nusing the AWS provider.\n\n\n```hcl\n\n# Create the bucket\n\nresource \"aws_s3_bucket\" \"demobucket\" {\n  bucket = \"terraformdemobucket\"\n  acl = \"private\"\n}\n\n```\n\n\nAfter provisioning the S3 bucket for the first time, someone decided to make\nthe S3 bucket publicly accessible. The example below grants public access\nto the bucket using `aws_s3_bucket_public_access_block`. `block_public_acls`\nand `block_public_policy` are set to `false` to allow any public access.\n\n\n```hcl\n\n# Grant bucket access: public\n\nresource \"aws_s3_bucket_public_access_block\" \"publicaccess\" {\n  bucket = aws_s3_bucket.demobucket.id\n  block_public_acls = false\n  block_public_policy = false\n}\n\n```\n\n\nThe S3 bucket is now publicly readable, and anyone who knows the URL or\nscans network ranges for open ports may find the S3 bucket and its data.\nMalicious actors can not only capture credentials but may also learn about\nyour infrastructure, IP addresses, internal server FQDNs, etc. from the\nlogs, backups, and database dumps being stored in the S3 bucket.\n\n\nWe need ways to mitigate and detect this security problem. The following\nsections describe the different tools you can use. The full Terraform code\nis located in [this\nproject](https://gitlab.com/gitlab-de/use-cases/infrastructure-as-code-scanning/-/tree/main/terraform/aws)\nand allows you to test all tools described in this blog post.\n\n\n## Tools to detect Terraform vulnerabilities\n\n\nIn the \"not worst case\" scenario, the Terraform code to manage your\ninfrastructure is persisted at a central Git server and not hidden somewhere\non a host or local desktop. Maybe you are using `terraform init`, `plan`, and\n`apply` jobs in CI/CD pipelines already. Let us look into methods and tools\nthat help detect the public S3 bucket vulnerability. 
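Before doing so, it is worth noting that the remediation for this particular finding is small: re-enable the blocking attributes on the same resource. This is a sketch based on the vulnerable example above; `ignore_public_acls` and `restrict_public_buckets` are the two additional attributes the AWS provider offers on this resource type.

```hcl
# Block all public access to the bucket again
resource "aws_s3_bucket_public_access_block" "publicaccess" {
  bucket                  = aws_s3_bucket.demobucket.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
```

Detecting the vulnerable state automatically is the harder part, which is what the scanners below address.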
Later, we will discuss\nCI/CD integrations and automating IaC security scanning.\n\n\nBefore we dive into the tools, make sure to clone the demo project locally\nto follow the examples yourself.\n\n\n```shell\n\n$ cd /tmp\n\n$ git clone\nhttps://gitlab.com/gitlab-de/use-cases/infrastructure-as-code-scanning.git\n&& cd infrastructure-as-code-scanning/\n\n```\n\n\nThe tool installation steps in this blog post are illustrated with [Homebrew\non macOS](https://brew.sh/). Please refer to the tools' documentation for\nalternative installation methods and supported platforms.\n\n\nYou can follow the tools for Terraform security scanning by reading\ntop-down, or navigate into the tools sections directly:\n\n\n- [tfsec](#tfsec)\n\n- [kics](#kics)\n\n- [checkov](#checkov)\n\n- [terrascan](#terrascan)\n\n- [semgrep](#semgrep)\n\n- [tflint](#tflint)\n\n\n### tfsec\n\n\n[tfsec](https://github.com/aquasecurity/tfsec) from Aqua Security can help\ndetect Terraform vulnerabilities. There are [Docker images\navailable](https://github.com/aquasecurity/tfsec#use-with-docker) to quickly\ntest the scanner on the CLI, or binaries to [install\ntfsec](https://aquasecurity.github.io/tfsec/v1.1.4/getting-started/installation/).\nRun `tfsec` on the local project path `terraform/aws/` to get a list of\nvulnerabilities.\n\n\n```shell\n\n$ brew install tfsec\n\n$ tfsec terraform/aws/\n\n```\n\n\nThe default scan provides a table overview on the CLI, which may need\nadditional filters. 
Inspect `tfsec --help` to get a list of all available\n[parameters](https://aquasecurity.github.io/tfsec/v1.1.4/getting-started/usage/)\nand try generating JSON and JUnit output files to process further.\n\n\n```shell\n\n$ tfsec terraform/aws --format json --out tfsec-report.json\n\n1 file(s) written: tfsec-report.json\n\n$ tfsec terraform/aws --format junit --out tfsec-junit.xml\n\n1 file(s) written: tfsec-junit.xml\n\n```\n\n\nThe full example is located in the [terraform/aws directory in this\nproject](https://gitlab.com/gitlab-de/use-cases/infrastructure-as-code-scanning/-/tree/main/terraform/aws).\n\n\n#### Parse tfsec JSON reports with jq\n\n\nIn an earlier blog post, we shared [how to detect the JSON data structures\nand filter with chained jq\ncommands](/blog/devops-workflows-json-format-jq-ci-cd-lint/). The\ntfsec report is a good exercise: extract the `results` key, iterate through\nall array items, filter by `rule_service` being `s3`, and only\nprint `severity`, `description`, and `location.filename`.\n\n\n```shell\n\n$ jq \u003C tfsec-report.json | jq -c '.[\"results\"]' | jq -c '.[] | select\n(.rule_service == \"s3\") | [.severity, .description, .location.filename]'\n\n```\n\n\n![tfsec parser output\nexample](https://about.gitlab.com/images/blogimages/iac-security-scanning/tfsec-json-jq-parser.png){:\n.shadow}\n\n\n### kics\n\n\n[kics](https://kics.io/) is another IaC scanner, providing support for many\ndifferent tools (Ansible, Terraform, Kubernetes, Dockerfile, and cloud\nconfiguration APIs such as AWS CloudFormation, Azure Resource Manager, and\nGoogle Deployment Manager).\n\n\nLet's try it: [Install kics](https://docs.kics.io/latest/getting-started/)\nand run it on the vulnerable project. `--report-formats`, `--output-path`\nand `--output-name` allow you to create a JSON report which can be\nautomatically parsed with additional tooling.\n\n\n```shell\n\n$ kics scan --path .\n\n$ kics scan --path . 
--report-formats json --output-path kics --output-name\nkics-report.json\n\n```\n\n\nParsing the JSON report from `kics` with jq works the same way as the tfsec\nexample above. Inspect the data structure and nested object, and filter by\nAWS as `cloud_provider`. The `files` entry is an array of dictionaries,\nwhich is a little tricky to extract and needs an additional\n`(.files[] | .file_name )` expression:\n\n\n```shell\n\n$ jq \u003C kics/kics-report.json | jq -c '.[\"queries\"]' | jq -c '.[] | select\n(.cloud_provider == \"AWS\") | [.severity, .description, (.files[] |\n.file_name ) ]'\n\n```\n\n\n![kics json jq\nparser](https://about.gitlab.com/images/blogimages/iac-security-scanning/kics-json-jq-parser.png){:\n.shadow}\n\n\n`kics` returns different [exit\ncodes](https://docs.kics.io/latest/results/#exit_status_code) based on the\nnumber of different severities found. `50` indicates `HIGH` severities and\ncauses your CI/CD pipeline to fail.\n\n\n### checkov\n\n\n[Checkov](https://checkov.io) supports Terraform (for AWS, GCP, Azure and\nOCI), CloudFormation, ARM, Serverless Framework, Helm charts, Kubernetes, and\nDocker.\n\n\n```shell\n\n$ brew install checkov\n\n$ checkov --directory .\n\n```\n\n\n### terrascan\n\n\n[Terrascan](https://runterrascan.io/docs/getting-started/) supports\nTerraform, and more [policies](https://runterrascan.io/docs/policies/) for\ncloud providers, Docker, and Kubernetes.\n\n\n```shell\n\n$ brew install terrascan\n\n$ terrascan scan .\n\n```\n\n\n### semgrep\n\n\nSemgrep is working on [Terraform\nsupport](https://semgrep.dev/docs/language-support/), currently in Beta. 
It\nalso detects Dockerfile errors, for example invalid or multiple port ranges,\nsimilar to kics.\n\n\n```shell\n\n$ brew install semgrep\n\n$ semgrep --config auto .\n\n```\n\n\n### tflint\n\n\n[tflint](https://github.com/terraform-linters/tflint) is also an alternative\nscanner.\n\n\n## Develop more IaC scenarios\n\n\nWhile testing IaC Security Scanners for the first time, I was looking for\ndemo projects and examples. The [kics queries list for\nTerraform](https://docs.kics.io/latest/queries/terraform-queries/) provides\nan exhaustive list of all vulnerabilities and the documentation linked. From\nthere, you can build and create potential attack vectors for demos and\nshowcases without leaking your company code and workflows.\n\n\n[Terragoat](https://github.com/bridgecrewio/terragoat) is also a great\nlearning resource to test various scanners and see real-life examples of\nvulnerabilities.\n\n\n```shell\n\n$ cd /tmp && git clone https://github.com/bridgecrewio/terragoat.git && cd\nterragoat\n\n\n$ tfsec .\n\n$ kics scan --path .\n\n$ checkov --directory .\n\n$ semgrep --config auto .\n\n$ terrascan scan .\n\n```\n\n\nIt is also important to verify the reported vulnerabilities and create\ndocumentation for required actions for your teams. Not all detected\nvulnerabilities are necessarily equally critical in your environment. 
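A short triage filter can support that verification step. The sketch below assumes the kics JSON report layout shown earlier (a top-level `queries` list whose entries carry `severity`, `description`, and a `files` list); the `filter_by_severity` helper name is my own:

```python
def filter_by_severity(report, levels=("HIGH",)):
    """Return [severity, description, file_name] rows for the given severity levels."""
    rows = []
    for q in report.get("queries", []):
        if q.get("severity") not in levels:
            continue
        # files is a list of dictionaries; one row per affected file
        for f in q.get("files", []):
            rows.append([q["severity"], q["description"], f["file_name"]])
    return rows


# Minimal inline data mirroring the kics report structure
report = {
    "queries": [
        {"severity": "HIGH", "description": "S3 bucket is publicly accessible",
         "files": [{"file_name": "terraform/aws/s3.tf"}]},
        {"severity": "LOW", "description": "Service Does Not Target Pod",
         "files": [{"file_name": "kubernetes/ecc-demo-service.yml"}]},
    ]
}

for row in filter_by_severity(report):
    print(" | ".join(row))
```

Loading a real `kics-report.json` with `json.load()` instead of the inline dictionary gives the same rows for your own scans.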
With\nthe rapid development of IaC,\n[GitOps](https://about.gitlab.com/topics/gitops/), and cloud-native\nenvironments, it can also be a good idea to use two or more scanners to see\nwhether one misses vulnerabilities that another detects.\n\n\nThe following sections discuss more scenarios in detail.\n\n\n- [Terraform Module Dependency Scans](#terraform-module-dependency-scans)\n\n- [IaC Security Scanning for\nContainers](#iac-security-scanning-for-containers)\n\n- [IaC Security Scanning with\nKubernetes](#iac-security-scanning-with-kubernetes)\n\n\n### Terraform Module Dependency Scans\n\n\nRe-usable IaC workflows can also introduce security vulnerabilities you are\nnot aware of. [This\nproject](https://gitlab.com/gitlab-de/use-cases/iac-tf-vuln-module) provides\nthe module files and package in the registry, which can be consumed by\n`main.tf` in the demo project.\n\n\n```hcl\n\nmodule \"my_module_name\" {\n  source = \"gitlab.com/gitlab-de/iac-tf-vuln-module/aws\"\n  version = \"1.0.0\"\n}\n\n```\n\n\nkics has [limited support for the official Terraform module\nregistry](https://docs.kics.io/latest/platforms/#terraform_modules),\n`checkov` failed to download private modules, and `terrascan` and `tfsec` work\nwhen `terraform init` is run before the scan. Depending on your\nrequirements, running `kics` for everything and `tfsec` for module\ndependency checks can be a solution; a suggestion was added\n[here](https://gitlab.com/groups/gitlab-org/-/epics/6653#note_840447132).\n\n\n### IaC Security Scanning for Containers\n\n\nSecurity problems in containers can lead to application deployment\nvulnerabilities. 
The [kics query\ndatabase](https://docs.kics.io/latest/queries/dockerfile-queries/) helps to\nreverse engineer more vulnerable examples: Using the `latest` tag, privilege\nescalations by invoking sudo in a container, ports out of range, and\nmultiple entrypoints are just a few bad practices.\n\n\nThe following\n[Dockerfile](https://gitlab.com/gitlab-de/use-cases/infrastructure-as-code-scanning/-/blob/main/Dockerfile)\nimplements example vulnerabilities for the scanners to detect:\n\n\n```dockerfile\n\n# Create vulnerabilities based on kics queries in\n# https://docs.kics.io/latest/queries/dockerfile-queries/\n\nFROM debian:latest\n\n\n# kics: Run Using Sudo\n\n# kics: Run Using apt\n\nRUN sudo apt install git\n\n\n# kics: UNIX Ports Out Of Range\n\nEXPOSE 99999\n\n\n# kics: Multiple ENTRYPOINT Instructions Listed\n\nENTRYPOINT [\"ex1\"]\n\nENTRYPOINT [\"ex2\"]\n\n```\n\n\nKics, tfsec, and terrascan can detect `Dockerfile` vulnerabilities, as can\nsemgrep and checkov. As an example scanner, terrascan can detect the\nvulnerabilities using the `--iac-type docker` parameter, which filters the\nscan type.\n\n\n```shell\n\n$ terrascan scan --iac-type docker\n\n```\n\n\n![terrascan Docker IaC type scan\nresult](https://about.gitlab.com/images/blogimages/iac-security-scanning/terrascan-docker-iac.png){:\n.shadow}\n\n\nYou can run kics and tfsec as an exercise to verify the results.\n\n\n### IaC Security Scanning with Kubernetes\n\n\nSecuring a Kubernetes cluster can be a challenging task. Open Policy Agent,\nKyverno, RBAC, etc., and many different YAML configuration attributes\nrequire reviews and automated checks before production deployments.\n[Cluster image\nscanning](https://docs.gitlab.com/ee/user/clusters/agent/vulnerabilities.html)\nis one way to mitigate security threats, next to [Container\nscanning](https://docs.gitlab.com/ee/user/application_security/container_scanning/)\nfor the applications being deployed. 
A suggested read is the book [“Hacking\nKubernetes”](https://www.oreilly.com/library/view/hacking-kubernetes/9781492081722/)\nby Andrew Martin and Michael Hausenblas if you want to dive deeper into\nKubernetes security and attack vectors.\n\n\nIt's easy to make mistakes when, for example, copying example YAML\nconfiguration and continuing to use it. I've created a deployment and service\nfor a [Kubernetes monitoring\nworkshop](/handbook/marketing/developer-relations/developer-evangelism/projects/#practical-kubernetes-monitoring-with-prometheus),\nwhich provides a practical example to learn from but also uses some bad\npractices.\n\n\nThe following configuration in\n[ecc-demo-service.yml](https://gitlab.com/gitlab-de/use-cases/infrastructure-as-code-scanning/-/blob/main/kubernetes/ecc-demo-service.yml)\nintroduces vulnerabilities and potential production problems:\n\n\n```yaml\n\n---\n\n# A deployment for the ECC Prometheus demo service with 3 replicas.\n\napiVersion: apps/v1\n\nkind: Deployment\n\nmetadata:\n  name: ecc-demo-service\n  labels:\n    app: ecc-demo-service\nspec:\n  replicas: 3\n  selector:\n    matchLabels:\n      app: ecc-demo-service\n  template:\n    metadata:\n      labels:\n        app: ecc-demo-service\n    spec:\n      containers:\n      - name: ecc-demo-service\n        image: registry.gitlab.com/everyonecancontribute/observability/prometheus_demo_service:latest\n        imagePullPolicy: IfNotPresent\n        args:\n        - -listen-address=:80\n        ports:\n        - containerPort: 80\n---\n\n# A service that references the demo service deployment.\n\napiVersion: v1\n\nkind: Service\n\nmetadata:\n  name: ecc-demo-service\n  labels:\n    app: ecc-demo-service\nspec:\n  ports:\n  - port: 80\n    name: web\n  selector:\n    app: ecc-demo-service\n```\n\n\nLet's scan the Kubernetes manifest with kics and parse the results again\nwith jq. 
A list of kics queries for Kubernetes can be found in the [kics\ndocumentation](https://docs.kics.io/latest/queries/kubernetes-queries/).\n\n\n```shell\n\n$ kics scan --path kubernetes --report-formats json --output-path kics\n--output-name kics-report.json\n\n\n$ jq \u003C kics/kics-report.json | jq -c '.[\"queries\"]' | jq -c '.[] | select\n(.platform == \"Kubernetes\") | [.severity, .description, (.files[] |\n.file_name ) ]'\n\n```\n\n\n![Kubernetes manifest scans and jq parser results with\nkics](https://about.gitlab.com/images/blogimages/iac-security-scanning/kics-kubernetes-jq-parser.png){:\n.shadow}\n\n\n[Checkov](https://www.checkov.io/) detects similar vulnerabilities with\nKubernetes.\n\n\n```\n\n$ checkov --directory kubernetes/\n\n$ checkov --directory kubernetes -o json > checkov-report.json\n\n```\n\n\n[kube-linter](https://docs.kubelinter.io/#/?id=installing-kubelinter)\nanalyzes Kubernetes YAML files and Helm charts for production readiness and\nsecurity.\n\n\n```shell\n\n$ brew install kube-linter\n\n$ kube-linter lint kubernetes/ecc-demo-service.yml --format json >\nkube-linter-report.json\n\n```\n\n\n[kubesec](https://kubesec.io/) provides security risk analysis for\nKubernetes resources. `kubesec` is also integrated into the [GitLab SAST\nscanners](https://docs.gitlab.com/ee/user/application_security/sast/#enabling-kubesec-analyzer).\n\n\n```shell\n\n$ docker run -i kubesec/kubesec:512c5e0 scan /dev/stdin \u003C\nkubernetes/ecc-demo-service.yml\n\n```\n\n\n## Integrations into CI/CD and Merge Requests for Review\n\n\nThere are many scanners out there, and most of them return the results in\nJSON which can be parsed and integrated into your CI/CD pipelines. You can\nlearn more about the evaluation of GitLab IaC scanners in [this\nissue](https://gitlab.com/gitlab-org/gitlab/-/issues/39695). 
The table in\nthe issue includes licenses, languages, outputs, and examples.\n\n\n`checkov` and `tfsec` provide JUnit XML reports as output format, which can\nbe parsed and integrated into CI/CD. Vulnerability reports need a\ndifferent format, though, so that they are not confused with unit test\nresults, for example. Integrating a SAST scanner in GitLab requires you to provide\n[artifacts:reports:sast](https://docs.gitlab.com/ee/ci/yaml/artifacts_reports.html#artifactsreportssast)\nas a specified output format and API. [This\nreport](https://docs.gitlab.com/ee/user/application_security/iac_scanning/#reports-json-format)\ncan then be consumed by GitLab integrations such as MR widgets and\nvulnerability dashboards, available in the Ultimate tier. The following\nscreenshot shows a Kubernetes deployment and service with potential\nvulnerabilities being added in [this\nMR](https://gitlab.com/gitlab-de/use-cases/infrastructure-as-code-scanning/-/merge_requests/3).\n\n\n![MR widget showing IaC vulnerabilities with\nKubernetes](https://about.gitlab.com/images/blogimages/iac-security-scanning/gitlab-iac-mr-widget-kubernetes.png){:\n.shadow}\n\n\n### Reports in MRs as comment\n\n\nThere are different ways to collect the JSON reports in your CI/CD pipelines\nor scheduled runs. One idea is to create a merge request comment\nwith a Markdown table. This needs a bit more work: parsing the reports,\nformatting the comment text, and interacting with the GitLab REST API, shown\nin the following steps of a Python script. You can follow the implementation\nsteps to re-create it in your preferred language for the scanner type and\nuse [GitLab API clients](/partners/technology-partners/#api-clients).\n\n\nFirst, read the report in JSON format, and check whether the `kics_version` key is\nset before continuing. 
Then extract the `queries` key, and prepare the\n`comment_body` with the Markdown table header columns.\n\n\n```python\n\nimport json\n\n\nFILE=\"kics/kics-report.json\"\n\n\nf = open(FILE)\n\nreport = json.load(f)\n\n\n# Parse the report: kics\n\nif \"kics_version\" in report:\n    print(\"Found kics '%s' in '%s'\" % (report[\"kics_version\"], FILE))\n    queries = report[\"queries\"]\nelse:\n    raise Exception(\"Unsupported report format\")\n\ncomment_body = \"\"\"### kics vulnerabilities report\n\n\n| Severity | Description | Platform | Filename |\n\n|----------|-------------|----------|----------|\n\n\"\"\"\n\n```\n\n\nNext, we need to parse all queries in a loop, and collect all column values.\nThey are collected into a new list, which then gets joined with the `|`\ncharacter. The `files` key needs a nested collection, as this is a list of\ndictionaries where only the `file_name` is of interest for the demo.\n\n\n```python\n\n# Example query to parse: {'query_name': 'Service Does Not Target Pod',\n'query_id': '3ca03a61-3249-4c16-8427-6f8e47dda729', 'query_url':\n'https://kubernetes.io/docs/concepts/services-networking/service/',\n'severity': 'LOW', 'platform': 'Kubernetes', 'category': 'Insecure\nConfigurations', 'description': 'Service should Target a Pod',\n'description_id': 'e7c26645', 'files': [{'file_name':\n'kubernetes/ecc-demo-service.yml', 'similarity_id':\n'9da6166956ad0fcfb1dd533df17852342dcbcca02ac559becaf51f6efdc015e8', 'line':\n38, 'issue_type': 'IncorrectValue', 'search_key':\n'metadata.name={{ecc-demo-service}}.spec.ports.name={{web}}.targetPort',\n'search_line': 0, 'search_value': '', 'expected_value':\n'metadata.name={{ecc-demo-service}}.spec.ports={{web}}.targetPort has a Pod\nPort', 'actual_value':\n'metadata.name={{ecc-demo-service}}.spec.ports={{web}}.targetPort does not\nhave a Pod Port'}]}\n\n\nfor q in queries:\n    #print(q) # DEBUG\n    l = []\n    l.append(q[\"severity\"])\n    l.append(q[\"description\"])\n    l.append(q[\"platform\"])\n\n    if 
\"files\" in q:\n        l.append(\",\".join((f[\"file_name\"] for f in q[\"files\"])))\n\n    comment_body += \"| \" + \" | \".join(l) + \" |\\n\"\n\nf.close()\n\n```\n\n\nThe markdown table has been prepared, so now it is time to communicate with\nthe GitLab API.\n[python-gitlab](https://python-gitlab.readthedocs.io/en/stable/api-usage.html)\nprovides a great abstraction layer with programmatic interfaces.\n\n\nThe GitLab API needs a project/group access token with API permissions. The\n`CI_JOB_TOKEN` is not sufficient.\n\n\n![Set the Project Access Token as CI/CD variable, not\nprotected](https://about.gitlab.com/images/blogimages/iac-security-scanning/gitlab-cicd-variable-project-access-token.png){:\n.shadow}\n\n\nRead the `GITLAB_TOKEN` from the environment, and instantiate a new `Gitlab`\nobject.\n\n\n```python\n\nGITLAB_URL='https://gitlab.com'\n\n\nif 'GITLAB_TOKEN' in os.environ:\n    gl = gitlab.Gitlab(GITLAB_URL, private_token=os.environ['GITLAB_TOKEN'])\nelse:\n    raise Exception('GITLAB_TOKEN variable not set. Please provide an API token to update the MR!')\n```\n\n\nNext, use the `CI_PROJECT_ID` CI/CD variable from the environment to select\nthe [project\nobject](https://python-gitlab.readthedocs.io/en/stable/gl_objects/projects.html)\nwhich contains the merge request we want to target.\n\n\n```python\n\nproject = gl.projects.get(os.environ['CI_PROJECT_ID'])\n\n```\n\n\nThe tricky part is to fetch the [merge\nrequest](https://python-gitlab.readthedocs.io/en/stable/gl_objects/merge_requests.html)\nID from the CI/CD pipeline, it is not always available. 
A workaround can be\nto read the `CI_COMMIT_REF_NAME` variable and match it against all MRs in\nthe project, checking whether the `source_branch` matches.\n\n\n```python\n\nimport sys\n\n\nreal_mr = None\n\n\nif 'CI_MERGE_REQUEST_ID' in os.environ:\n    mr_id = os.environ['CI_MERGE_REQUEST_ID']\n    real_mr = project.mergerequests.get(mr_id)\n\n# Note: This workaround can be very expensive in projects with many MRs\n\nif 'CI_COMMIT_REF_NAME' in os.environ:\n    commit_ref_name = os.environ['CI_COMMIT_REF_NAME']\n\n    mrs = project.mergerequests.list()\n\n    for mr in mrs:\n        if mr.source_branch in commit_ref_name:\n            real_mr = mr\n            # found the MR for this source branch\n            # print(mr) # DEBUG\n\nif not real_mr:\n    print(\"Pipeline not run in a merge request, no reports sent\")\n    sys.exit(0)\n```\n\n\nLast but not least, use the MR object to [create a new\nnote](https://python-gitlab.readthedocs.io/en/stable/gl_objects/notes.html)\nwith the `comment_body` including the Markdown table created before.\n\n\n```python\n\nmr_note = real_mr.notes.create({'body': comment_body})\n\n```\n\n\nThis workflow creates a new MR comment every time a new commit is pushed.\nConsider evaluating the script and refining the update frequency yourself. 
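One possible refinement, sketched below: tag the comment with a hidden marker and update the existing note instead of creating a new one on every push. The `upsert_comment` helper name is my own; it only relies on the `notes.list()`, `notes.create()`, and `note.save()` calls from python-gitlab.

```python
MARKER = "<!-- kics-report -->"  # hidden HTML comment to find our note again

def upsert_comment(mr, body):
    """Update an existing marked MR note, or create it on the first run."""
    body = MARKER + "\n" + body
    for note in mr.notes.list():
        if note.body.startswith(MARKER):
            note.body = body
            note.save()
            return note
    return mr.notes.create({"body": body})
```

Calling `upsert_comment(real_mr, comment_body)` in place of `real_mr.notes.create(...)` then keeps a single, always-current report comment per merge request.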
The script can be integrated into CI/CD by running kics before\ngenerating the reports, as shown in the following example configuration for\n`.gitlab-ci.yml`:\n\n\n```yaml\n\n# Full RAW example for kics reports and scans\n\nkics-scan:\n  image: python:3.10.2-slim-bullseye\n  variables:\n    # Visit for new releases\n    # https://github.com/Checkmarx/kics/releases\n    KICS_VERSION: \"1.5.1\"\n  script:\n    - echo $CI_PIPELINE_SOURCE\n    - echo $CI_COMMIT_REF_NAME\n    - echo $CI_MERGE_REQUEST_ID\n    - echo $CI_MERGE_REQUEST_IID\n    - apt-get update && apt-get install wget tar --no-install-recommends\n    - set -ex; wget -q -c \"https://github.com/Checkmarx/kics/releases/download/v${KICS_VERSION}/kics_${KICS_VERSION}_linux_x64.tar.gz\" -O - | tar -xz --directory /usr/bin &>/dev/null\n    # local requirements\n    - pip install -r requirements.txt\n    - kics scan --no-progress -q /usr/bin/assets/queries -p $(pwd) -o $(pwd) --report-formats json --output-path kics --output-name kics-report.json || true\n    - python ./integrations/kics-scan-report-mr-update.py\n```\n\n\nYou can find the [.gitlab-ci.yml\nconfiguration](https://gitlab.com/gitlab-de/use-cases/infrastructure-as-code-scanning/-/blob/main/.gitlab-ci.yml)\nand the full script, including more inline comments and debug output [in\nthis\nproject](https://gitlab.com/gitlab-de/use-cases/infrastructure-as-code-scanning).\nYou can see the implementation MR testing itself in [this\ncomment](https://gitlab.com/gitlab-de/use-cases/infrastructure-as-code-scanning/-/merge_requests/4#note_840472146).\n\n\n![MR comment with the kics report as Markdown\ntable](https://about.gitlab.com/images/blogimages/iac-security-scanning/kics-python-gitlab-mr-update-table.png){:\n.shadow}\n\n\n### MR comments using GitLab IaC SAST reports as source\n\n\nThe steps in the previous section show the raw `kics` command execution,\nincluding JSON report parsing that requires you to create your own parsing\nlogic. 
Alternatively, you can rely on the [IaC scanner in\nGitLab](https://docs.gitlab.com/ee/user/application_security/iac_scanning/#making-iac-analyzers-available-to-all-gitlab-tiers)\nand parse the SAST JSON report as [a standardized\nformat](https://docs.gitlab.com/ee/user/application_security/iac_scanning/#reports-json-format).\nThis is available for all GitLab tiers.\n\n\nDownload the [gl-sast-report.json\nexample](https://gitlab.com/gitlab-de/use-cases/infrastructure-as-code-scanning/-/blob/main/example-reports/gl-sast-report-kics-iac.json),\nsave it as `gl-sast-report.json` in the same directory as the script, and\nparse the report in a similar way as shown above.\n\n\n```python\n\nimport json\n\n\nFILE=\"gl-sast-report.json\"\n\n\nf = open(FILE)\n\nreport = json.load(f)\n\n\n# Parse the report: kics\n\nif \"scan\" in report:\n    print(\"Found scanner '%s' in '%s'\" % (report[\"scan\"][\"scanner\"][\"name\"], FILE))\n    queries = report[\"vulnerabilities\"]\nelse:\n    raise Exception(\"Unsupported report format\")\n```\n\n\nThe parameters in the vulnerability report also include the CVE number. The\n`location` key uses a nested dictionary and is thus easier to parse.\n\n\n```python\n\ncomment_body = \"\"\"### IaC SAST vulnerabilities report\n\n\n| Severity | Description | Category | Location | CVE |\n\n|----------|-------------|----------|----------|-----|\n\n\"\"\"\n\n\nfor q in queries:\n    #print(q) # DEBUG\n    l = []\n    l.append(q[\"severity\"])\n    l.append(q[\"description\"])\n    l.append(q[\"category\"])\n    l.append(q[\"location\"][\"file\"])\n    l.append(q[\"cve\"])\n\n    comment_body += \"| \" + \" | \".join(l) + \" |\\n\"\n\nf.close()\n\n```\n\n\nThe `comment_body` contains the Markdown table, and the same code as before can\nupdate the MR with a comment using the GitLab API Python bindings. 
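One caveat for both table builders: a `description` containing a literal `|` would break the Markdown table layout. A small helper (the `md_cell` name is my own) can escape cell values before joining them:

```python
def md_cell(value):
    """Escape characters that would break a Markdown table cell."""
    return str(value).replace("|", "\\|").replace("\n", " ")

# Hypothetical row with a pipe character in the description
row = ["HIGH", "Bucket ACL | public", "kubernetes/ecc-demo-service.yml"]
print("| " + " | ".join(md_cell(v) for v in row) + " |")
```

Wrapping each `l.append(...)` value with `md_cell()` keeps the generated comment intact for arbitrary scanner output.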
An\nexample run is shown in [this MR\ncomment](https://gitlab.com/gitlab-de/use-cases/infrastructure-as-code-scanning/-/merge_requests/8#note_841940319).\n\n\nYou can integrate the script into your CI/CD workflows using the following\nsteps:\n\n1. Override the `kics-iac-sast` job `artifacts` created by the\n`Security/SAST-IaC.latest.gitlab-ci.yml` template.\n2. Add a job `iac-sast-parse` which parses the JSON report and calls the\nscript to send an MR comment.\n\n\n```yaml\n\n# GitLab integration with SAST reports spec\n\ninclude:\n\n- template: Security/SAST-IaC.latest.gitlab-ci.yml\n\n\n# Override the SAST report artifacts\n\nkics-iac-sast:\n  artifacts:\n    name: sast\n    paths:\n      - gl-sast-report.json\n    reports:\n      sast: gl-sast-report.json\n\niac-sast-parse:\n  image: python:3.10.2-slim-bullseye\n  needs: ['kics-iac-sast']\n  script:\n    - echo \"Parsing gl-sast-report.json\"\n    - pip install -r requirements.txt\n    - python ./integrations/sast-iac-report-mr-update.py\n  artifacts:\n    paths:\n      - gl-sast-report.json\n```\n\n\nThe CI/CD pipeline testing itself can be found in [this MR\ncomment](https://gitlab.com/gitlab-de/use-cases/infrastructure-as-code-scanning/-/merge_requests/9#note_841976761).\nPlease review the\n[sast-iac-report-mr-update.py](https://gitlab.com/gitlab-de/use-cases/infrastructure-as-code-scanning/-/blob/main/integrations/sast-iac-report-mr-update.py)\nscript and evaluate whether it is useful for your workflows.\n\n\n## What is the best integration strategy?\n\n\nOne way to evaluate the scanners is to look at their extensibility. 
For\nexample, [kics](https://docs.kics.io/latest/creating-queries/) calls them\n`queries`, [semgrep](https://semgrep.dev/docs/writing-rules/overview/) uses\n`rules`,\n[checkov](https://www.checkov.io/3.Custom%20Policies/Custom%20Policies%20Overview.html)\nhas `policies`, and\n[tfsec](https://aquasecurity.github.io/tfsec/v1.1.5/getting-started/configuration/custom-checks/)\ngoes with `custom checks`. Each of these specifications comes with extensive\ntutorials and allows you to create and contribute your own detection\nmethods.\n\n\nMany of the scanners shown provide container images and CI/CD integration\ndocumentation. Make sure to include this requirement in your\nevaluation. For a fully integrated and tested solution, use the [IaC\nSecurity Scanning feature in\nGitLab](https://docs.gitlab.com/ee/user/application_security/iac_scanning/),\ncurrently based on the `kics` scanner. If you already have experience with\nother scanners, or prefer your own custom integration, evaluate the\nalternatives for your solution. 
All scanners discussed in this blog post\nprovide JSON as output format, which helps with programmatic parsing and\nautomation.\n\n\nMaybe you'd like to [contribute a new IaC\nscanner](https://docs.gitlab.com/ee/user/application_security/iac_scanning/#contribute-your-scanner)\nor help improve the detection rules and functionality from the open source\nscanners :-)\n\n\nCover image by [Sawyer Bengtson](https://unsplash.com/photos/tnv84LOjes4) on\n[Unsplash](https://unsplash.com)\n\n{: .note}\n",[855,9,685],{"slug":1047,"featured":6,"template":688},"fantastic-infrastructure-as-code-security-attacks-and-how-to-find-them","content:en-us:blog:fantastic-infrastructure-as-code-security-attacks-and-how-to-find-them.yml","Fantastic Infrastructure As Code Security Attacks And How To Find Them","en-us/blog/fantastic-infrastructure-as-code-security-attacks-and-how-to-find-them.yml","en-us/blog/fantastic-infrastructure-as-code-security-attacks-and-how-to-find-them",{"_path":1053,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":1054,"content":1060,"config":1066,"_id":1068,"_type":13,"title":1069,"_source":15,"_file":1070,"_stem":1071,"_extension":18},"/en-us/blog/five-things-i-wish-i-knew-about-kubernetes",{"title":1055,"description":1056,"ogTitle":1055,"ogDescription":1056,"noIndex":6,"ogImage":1057,"ogUrl":1058,"ogSiteName":675,"ogType":676,"canonicalUrls":1058,"schema":1059},"5 things I wish I'd known about Kubernetes before I started","Looking to dive into Kubernetes? 
Here’s some advice on how to get started from a GitLab engineer.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749670146/Blog/Hero%20Images/containers-for-five-things-kubernetes-blog-post.jpg","https://about.gitlab.com/blog/five-things-i-wish-i-knew-about-kubernetes","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"5 things I wish I'd known about Kubernetes before I started\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Jason Plum\"}],\n        \"datePublished\": \"2018-04-16\",\n      }",{"title":1055,"description":1056,"authors":1061,"heroImage":1057,"date":1063,"body":1064,"category":683,"tags":1065},[1062],"Jason Plum","2018-04-16","\n\nI first encountered Kubernetes in January 2017 when our CEO [Sid Sijbrandij](/company/team/#sytses) challenged me and five other team members to get a live install functional on Kubernetes for an Idea to Production demo during the company summit in Cancún.\n\nPrior to the challenge I had never touched Kubernetes. Nonetheless, my team members and I conquered the challenge, completing the task a day before deadline to boot. You can [watch the demo here](#kubernetes-summit-challenge-demo).\n\nNow, a little more than a year later, I've taken a deeper dive into the container orchestration platform, leading my team in building and releasing the alpha version of the [cloud native GitLab helm chart](https://gitlab.com/charts/gitlab/blob/master/README.md), which allows for the deployment of GitLab on Kubernetes. With that experience fresh in mind, I've got a bit of advice for those looking to move into the world of Kubernetes:\n\n## The internet is your friend. Check out the documentation, online courses and walkthroughs.\n\nFirst things first, there are a couple of really good sets of documentation out there, and even a solid [course on edX](https://www.edx.org/course/introduction-to-kubernetes). These are all good choices. 
You don’t have to go through all of the courses to really get a running start with what’s going on. But if you want to get into the nitty-gritty, I would strongly suggest taking some of the courses. If all you want to do is see it work, be able to play with it and kind of get an idea of what it is, then you can get a [free trial](https://cloud.google.com/partners/partnercredit/?PCN=a0n60000006Vpz4AAC) with [GKE (Google Kubernetes Engine)](/blog/gke-gitlab-integration/), set up a little cluster and do a deployment that way. And if all you want to do is deploy a couple of your applications into the same cluster, we (GitLab) already have [Auto DevOps](https://docs.gitlab.com/ee/topics/autodevops/) that can hook everything together for you, and then you can use your entire workflow, do your deployments, and pop right in there. We’ll even help you spin up a GKE cluster with all the requirements [right from the UI](https://docs.gitlab.com/ee/user/project/clusters/#adding-and-creating-a-new-gke-cluster-via-gitlab).\n\nBut if you want to do it by hand the first time, that’s one of those things where you should start with the tutorial walkthroughs. Install the tools. They are all straightforward to get your hands on. Pull down one of the charts, try it, change some configuration options and retry it. Just play with it.\n\n## Be clear on how you will use Kubernetes.\n\nThe challenges you encounter in Kubernetes really depend on what you’re trying to do with it. Are you using it as a test round, are you using it as a staging environment, or are you going all the way in and going for production? Just using it for a development environment is not really complicated. You need to understand some basic concepts, like namespaces. You need to know what a secret is, what a configuration is, and what a deployment is. These core concepts will get you a very long way.\n\nBeyond that, you start getting into the involved steps. 
That’s where you need to understand what didn’t exist prior, like the role-based access controls, or RBAC, which is now involved with Kubernetes and also Helm. Those features did not exist a year ago, and now they do. They are becoming ever-present and even more involved. This is good for people doing production, engineers, SREs (site reliability engineers), deployments, customers, etc. because now you’re making sure that things aren’t touching other things they shouldn’t. It’s not an open, flat plane of network.\n\nNow you have fine-grained controls via RBAC. Multiple namespaces, with controls per namespace on access or creation to secrets and configuration. This allows you to have production-grade multi-tenant clusters where you are not concerned about neighbors stepping on each other or poking their nose where they don't belong. This is a big step compared to the state of Kubernetes as a whole in early 2017.\n\n> The thing I wish I knew was how fast it was going to develop. I walked into Kubernetes in January and then I walked away from it in February. When I came back to it in September, I was surprised by how much had changed. And then the same thing keeps happening every single release.\n\n## Don’t expect the same version on every service provider.\n\nI think the biggest thing that people should understand is that not all cloud providers provide the exact same version of Kubernetes. They’re all very close, they’re all almost identical, but the way in which certain features are implemented is slightly different. So, the way you get it on Azure’s container services and the way you get it on Amazon’s container services or GKE won't be exactly the same. Everybody’s implementation is slightly different. 
Perhaps the available version of the base functionality is going to be a little different, but the real difference will be between each of these providers’ own product integrations.\n\nThen there’s the whole ‘roll your own’ approach, at which point you get to use really nifty plugins and other components that you can’t use out of the box with a cloud provider today. Play with it, but it still comes down to this: there are differences between the providers. Target mainline or vanilla, and it will work everywhere. Target a provider, and you’re now a part of that provider.\n\n## Be nimble. Change is constant, but don’t follow along blindly in an attempt to keep up.\n\nWow, there is just so much development. In the year from when I first touched Kubernetes to where I’m at now, the feature set has expanded quite a bit. And the controls that are required for large enterprises are now in place. These can bite you if you’re not paying attention, but they’re not horribly hard to understand if you’re willing to just take a moment and read. Also, everybody and their brother is now doing this and playing with this. Just because you see somebody else do it doesn’t mean it’s an industry best practice.\n\n## Last bit of sage advice: Seriously. DO NOT sleep on the releases.\n\nThe thing I wish I knew was how fast it was going to develop. I walked into Kubernetes in January and then I walked away from it in February. When I came back to it in September, I was surprised by how much had changed. And then the same thing keeps happening every single release.\n\nIt is a production-ready system. However, new feature sets and capabilities are evolving at such a pace that it can be hard to keep up with. You’re not breaking anything, but now there’s all these new, nifty features. All the shinies keep coming.\n\nThis is not six-month-release-cycle software. 
I’m not going to install Kubernetes, walk away for a year and come back thinking I’ll simply be able to go to the next LTS (long-term support). You have to be present. You have to be paying attention. It doesn’t matter if you only check in once a month, you’ve got to check in once a month.\n\n",[9],{"slug":1067,"featured":6,"template":688},"five-things-i-wish-i-knew-about-kubernetes","content:en-us:blog:five-things-i-wish-i-knew-about-kubernetes.yml","Five Things I Wish I Knew About Kubernetes","en-us/blog/five-things-i-wish-i-knew-about-kubernetes.yml","en-us/blog/five-things-i-wish-i-knew-about-kubernetes",{"_path":1073,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":1074,"content":1080,"config":1086,"_id":1088,"_type":13,"title":1089,"_source":15,"_file":1090,"_stem":1091,"_extension":18},"/en-us/blog/fluentd-using-gitlab-ci-cd",{"title":1075,"description":1076,"ogTitle":1075,"ogDescription":1076,"noIndex":6,"ogImage":1077,"ogUrl":1078,"ogSiteName":675,"ogType":676,"canonicalUrls":1078,"schema":1079},"Thanks Fluentd for betting on GitLab CI/CD!","We're happy to support fresh CNCF graduate Fluentd with GitLab CI/CD, and excited about their latest innovation offering stream processing on the edge.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749678614/Blog/Hero%20Images/gitlab-fluentd.png","https://about.gitlab.com/blog/fluentd-using-gitlab-ci-cd","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"Thanks Fluentd for betting on GitLab CI/CD!\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Priyanka Sharma\"}],\n        \"datePublished\": \"2019-05-21\",\n      }",{"title":1075,"description":1076,"authors":1081,"heroImage":1077,"date":1083,"body":1084,"category":876,"tags":1085},[1082],"Priyanka Sharma","2019-05-21","\nFluentd, the [latest project to graduate](https://www.fluentd.org/blog/fluentd-cncf-graduation) in the CNCF, announced on stage at KubeCon 
Barcelona today that it is using [GitLab CI/CD](/solutions/continuous-integration/) for continuous integration. We are thrilled about the shout out and honored to support such an influential and innovative project.\n\nFor those who haven’t yet worked with Fluentd, it is an [open source data collector](https://www.fluentd.org/architecture), which lets you unify the data collection and consumption for a better use and understanding of data. Fluent Bit is their lighter-weight forwarder for those with exacting memory requirements. The project sports 7,868 stars on GitHub, and its community has contributed more than 900 plugins. It sees more than 100K downloads a day!\n\nThe latest innovation from Fluentd around [stream processing on the edge](https://docs.fluentbit.io/stream-processing/) can be very useful for our industry. As many of those who monitor large-scale, complex, distributed systems, run IoT businesses, or build smart cities will attest, more and more data is generated by these systems, and analysis often needs to happen blazingly fast to be meaningful. The standard data analysis model, where data is first stored and indexed in a database (presumably in some cloud) and then analyzed, is not good enough for some real-time and complex analysis needs. The latencies associated with such data transfer may be too high to support applications involving time-critical, data-driven decision making. With Fluent Bit, the Fluentd team is looking to process the data while it’s still in motion in the log processor, bringing significant speed advantages.\n\nWhile I am reading papers by others attempting to build stream processing on the edge, I find Fluentd’s efforts exciting because they already have major community traction and are part of companies’ observability workflows for logging. 
The [CNCF graduation criteria](https://github.com/cncf/toc/blob/master/process/graduation_criteria.adoc) that Fluentd met will further embolden enterprises to try it out, since the requirements include a diverse contributor community and security audits.\n\nWe've spent the past few months collaborating with Fluentd on their CI needs, and it's been very educational for us. We learned about the unique challenges that fast-moving projects in the CNCF face, and how we can be of assistance with our CI/CD offering. A large part of the answer is providing clear and consistent guidance around converting pipelines and then supporting the projects to success. If you are a CNCF project interested in working with GitLab CI/CD, holler at us and we’d be delighted to help.\n\nUntil then, enjoy KubeCon Barca!\n",[108,835,923,727,278,9],{"slug":1087,"featured":6,"template":688},"fluentd-using-gitlab-ci-cd","content:en-us:blog:fluentd-using-gitlab-ci-cd.yml","Fluentd Using Gitlab Ci Cd","en-us/blog/fluentd-using-gitlab-ci-cd.yml","en-us/blog/fluentd-using-gitlab-ci-cd",{"_path":1093,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":1094,"content":1100,"config":1107,"_id":1109,"_type":13,"title":1110,"_source":15,"_file":1111,"_stem":1112,"_extension":18},"/en-us/blog/from-idea-to-production-on-thousands-of-clouds",{"title":1095,"description":1096,"ogTitle":1095,"ogDescription":1096,"noIndex":6,"ogImage":1097,"ogUrl":1098,"ogSiteName":675,"ogType":676,"canonicalUrls":1098,"schema":1099},"From idea to production on thousands of clouds","Deliver cloud native applications in more places consistently at scale with GitLab and Gravity.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749679266/Blog/Hero%20Images/blue-lights.jpg","https://about.gitlab.com/blog/from-idea-to-production-on-thousands-of-clouds","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"From idea to production on thousands 
of clouds\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Ev Kontsevoy\"}],\n        \"datePublished\": \"2019-11-20\",\n      }",{"title":1095,"description":1096,"authors":1101,"heroImage":1097,"date":1103,"body":1104,"category":300,"tags":1105},[1102],"Ev Kontsevoy","2019-11-20","\nToday, deploying an application with GitLab is easier than ever: just create a Kubernetes cluster on your cloud of choice, connect it to GitLab with the Kubernetes integration, and Auto DevOps creates a full deployment pipeline for you.\n\nBut what if you need your app to run in two clusters in two separate regions? Ten clusters across multiple cloud providers? A hundred clusters and also on a fleet of self-driving trucks?\n\nAt [Gravitational](https://gravitational.com), we believe the future should not belong to a single cloud provider and developers should be able to run their applications anywhere with the same simplicity as having a single Kubernetes cluster.\n\nI am a huge fan of GitLab. I’ve had the great pleasure of getting to know much of the founding team [over the years](https://about.gitlab.com/blog/gitlab-joins-forces-with-gravitational/) and was happy to provide my [own contribution](https://gitlab.com/gitlab-org/gitlab-foss/issues/22864) to the community a while back. Today, I’m happy to share some thoughts on how to build with GitLab and deploy applications into dozens or even hundreds of cloud environments. \n\n## The rise of multicloud\n\nHow do you run applications in different data centers? Do you need to rewrite them from scratch? AWS may still be the dominant cloud provider, but cloud competitors are eating into their lead. It’s not just the big public cloud companies either. 
[Private cloud data centers](https://www.forbes.com/sites/jasonbloomberg/2019/02/02/have-private-clouds-finally-found-their-place-in-the-enterprise/#2f859685604f) are growing just as rapidly.\n\nMany companies that need to meet tough security and compliance requirements will require applications to run in their bare metal data centers. Running an application in an on-premises or even air-gapped data center adds additional complexity due to the hundreds or even thousands of dependencies in modern applications.\n\nGravitational has built Gravity, an open source [Kubernetes packaging solution](https://gravitational.com/gravity/) that allows developers to build “cluster images” (similar to VM images) that can contain an entire Kubernetes cluster pre-loaded with multiple applications. You would use GitLab to go from idea to production, and Gravity to expand your production to anywhere in the world. \n\nStatements like, “I have snapshotted our entire production environment and emailed it to you, so you can run it in your private data center,” will not seem completely crazy.\n\nGravity uses standard, upstream CNCF-supported tooling for creating \"images\" of Kubernetes clusters containing the applications and their dependencies. The resulting files are called cluster images, which are just .tar files.\n\nA cluster image can be used to recreate full replicas of the original environments for any deployment environment where compliance and consistency matter, i.e. in locked-down AWS/GCP/Azure environments or even in air-gapped server rooms. 
Each image includes all dependencies to spin up a full cluster, as well as the Gravity daemon that handles the most common operational tasks associated with Kubernetes applications, and it monitors and alerts human operators of problems.\n\n## Deploy with GitLab, scale with Gravity\n\n![Gravity dashboard](https://about.gitlab.com/images/blogimages/gravity-dashboard.png)\n\nDevelopers can leverage a GitLab repository as a single source of truth for rolling out a Kubernetes app and leverage [GitLab CI/CD](https://docs.gitlab.com/ee/ci/) for continuous delivery.\n\nAny project of meaningful scale begins by defining an [epic](https://docs.gitlab.com/ee/user/group/epics/) with goals, milestones, and tasks. An [issue](https://docs.gitlab.com/ee/user/project/issues/#issues) is the main object for collaborating on ideas and planning work. GitLab’s [package and container registry](https://about.gitlab.com/stages-devops-lifecycle/package/) helps you manage and package dependencies. \n\n[The GitLab Kubernetes integration](https://docs.gitlab.com/ee/user/project/clusters/) allows customers to create Kubernetes clusters, utilize review apps, run pipelines, use web terminals, deploy apps, view pod logs, detect and monitor Kubernetes, and much more. For deploying a Kubernetes cluster in a single destination, GitLab provides everything you need from start to finish. \n\nHowever, if your customers need to run your application in their private data centers, they can use Gravity, which essentially copy/pastes the entire Kubernetes cluster environment you’ve built in GitLab. \n\n[Download](https://gravitational.com/gravity/download/) and set up the Gravity open source edition following our [quickstart guide](https://gravitational.com/gravity/docs/quickstart/). From Gravity, you can build a cluster image of your Kubernetes application. 
Gravity’s [documentation](https://gravitational.com/gravity/docs/overview/) will walk you through the steps required to build an image manifest that describes the image build, the installation process, and the system requirements for the cluster. \n\nYou can build empty Kubernetes cluster images to quickly create a large number of identical, production-ready Kubernetes clusters within an organization, or you can build a cluster image that also includes Kubernetes applications to distribute your application to third parties. \n\n## Next steps\n\nIf you want to learn more about working with Kubernetes, start with [Kubernetes 101](https://www.youtube.com/watch?v=rq4GZ_GybN8). You’ll learn how GitLab and Kubernetes interact at various touchpoints. And, if you’re looking for a way to port your applications to new environments, check out [Gravity](https://gravitational.com/gravity). \n\n## About the guest author\n\nEv is a co-founder and the CEO of Gravitational. Before Gravitational, he launched the on-demand OpenCompute servers at Rackspace. Prior to Rackspace, he co-founded Mailgun, the first email service built for developers. Ev has been a fighter against unnecessary complexity in software for 20 years. He abhors cars but loves trains and open source software that doesn't require an army of consultants to operate.\n\n## About Gravitational\n\n[Gravitational](https://gravitational.com) helps companies deliver cloud applications across cloud providers, on-premises environments, and even air-gapped server rooms. Products include Teleport for multi-cloud privileged access management that doesn't get in the way of developer productivity, and Gravity, a Kubernetes packaging solution that takes the drama out of on-prem deployments. Gravitational was founded in 2015 and recently [announced their Series A](https://gravitational.com/blog/gravitational-series-a-funding/). 
\n\nCover image by [Sharon McCutcheon](https://unsplash.com/@sharonmccutcheon) on [Unsplash](https://unsplash.com/photos/TMwHpCrU8D4)\n",[727,685,232,9,108,1106],"startups",{"slug":1108,"featured":6,"template":688},"from-idea-to-production-on-thousands-of-clouds","content:en-us:blog:from-idea-to-production-on-thousands-of-clouds.yml","From Idea To Production On Thousands Of Clouds","en-us/blog/from-idea-to-production-on-thousands-of-clouds.yml","en-us/blog/from-idea-to-production-on-thousands-of-clouds",{"_path":1114,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":1115,"content":1121,"config":1129,"_id":1131,"_type":13,"title":1132,"_source":15,"_file":1133,"_stem":1134,"_extension":18},"/en-us/blog/from-monolith-to-microservices-how-to-leverage-aws-with-gitlab",{"title":1116,"description":1117,"ogTitle":1116,"ogDescription":1117,"noIndex":6,"ogImage":1118,"ogUrl":1119,"ogSiteName":675,"ogType":676,"canonicalUrls":1119,"schema":1120},"From monolith to microservices: How to leverage AWS with GitLab","GitLab recently spent time with Ask Media Group and AWS to discuss how modernizing from self-managed to a cloud native system empowers developers.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749679645/Blog/Hero%20Images/askmediablog-.jpg","https://about.gitlab.com/blog/from-monolith-to-microservices-how-to-leverage-aws-with-gitlab","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"From monolith to microservices: How to leverage AWS with GitLab\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Brein Matturro\"}],\n        \"datePublished\": \"2020-03-24\",\n      }",{"title":1116,"description":1117,"authors":1122,"heroImage":1118,"date":1124,"body":1125,"category":876,"tags":1126},[1123],"Brein Matturro","2020-03-24","\n\nAsk Media Group operates over 30 websites and provides enriched search results, articles, galleries, and shopping sites to over 100 million 
unique visitors each month. About two years ago, [Ask Media](https://www.askmediagroup.com/) was looking for ways to grow the business, draw advertisers, and expand its audience. Routine tasks like onboarding developers or releasing software took too long. The monolithic system that was in place had limited capabilities and added financial burdens for services that went unused. \n\nChenglim Ear, principal software engineer at Ask Media, recently sat down with [Trevor Hansen](https://www.linkedin.com/in/startuptrev), solutions architect at AWS, to discuss how adopting GitLab empowered developers to improve the customer experience, release software quicker, and leverage AWS cloud services. \n \n## Building microservices from monoliths\n\nAsk Media was looking to move away from a monolithic system to [microservices](/topics/microservices/) in order to modernize workflow and improve the overall business process. “We wanted to move over to microservices. We wanted to [leverage Kubernetes](/solutions/kubernetes/). It was a new container world that was shaping. When we looked at GitLab, it was very complete in providing what we needed to be able to build images, to run on containers,” according to Chenglim. “That was a very big deciding factor. GitLab had everything that we needed.” \n\nDevelopers can now break services into multiples and develop them independently, own the code, and have full visibility prior to deployment. “We're making the hidden logic transparent and we enable the parts of the logic to be independently developed in parallel. So you can have developers all working on their own, with different skillsets,” Chenglim says. \n\n## Containers, cost, and scalability\n\n“We needed a system that could handle change. When we look at what we did to speed up development, make it simple and transparent, and control the cost, we see a paradigm shift. GitLab gave us push-button releases. 
Docker and Kubernetes enabled us to switch to a microservices architecture and AWS enabled auto scaling,” says Chenglim. “On Amazon, we started building Kubernetes clusters and GitLab became our command and control interface.” \n\nAsk Media was looking for a tool that could scale and grow as needed. Cost, speed, and functionality are the tenets that AWS focuses on providing to its customers, according to Hansen. AWS works closely with Ask Media to ensure that the containers in place offer the scalability, flexibility, and timeliness they need. \n\nWith [GitLab and AWS](/partners/technology-partners/aws/), Ask Media developers built out a platform that draws on the knowledge of all team members. “With AWS, we wanted a product that was fairly complete and mature. AWS has a lot of history and lots of services. We definitely wanted to be able to leverage those services and to build on a platform that was solid,” Chenglim says. “We set off to build Kubernetes clusters right on EC2 instances. 
We continue to look at opportunities to leverage the resources available through AWS.”\n\nTo learn more about how Ask Media made the transition to cloud native, check out the full [webcast](/webcast/cloud-native-transformation/).\n\nCover image by [Eric Muhr](https://unsplash.com/@ericmuhr?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText) on [Unsplash](https://unsplash.com)\n{: .note}",[1127,707,9,1128],"webcast","UI",{"slug":1130,"featured":6,"template":688},"from-monolith-to-microservices-how-to-leverage-aws-with-gitlab","content:en-us:blog:from-monolith-to-microservices-how-to-leverage-aws-with-gitlab.yml","From Monolith To Microservices How To Leverage Aws With Gitlab","en-us/blog/from-monolith-to-microservices-how-to-leverage-aws-with-gitlab.yml","en-us/blog/from-monolith-to-microservices-how-to-leverage-aws-with-gitlab",{"_path":1136,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":1137,"content":1143,"config":1151,"_id":1153,"_type":13,"title":1154,"_source":15,"_file":1155,"_stem":1156,"_extension":18},"/en-us/blog/gcp-move-update",{"title":1138,"description":1139,"ogTitle":1138,"ogDescription":1139,"noIndex":6,"ogImage":1140,"ogUrl":1141,"ogSiteName":675,"ogType":676,"canonicalUrls":1141,"schema":1142},"Update on our planned move from Azure to Google Cloud Platform","GitLab.com is migrating to Google Cloud Platform August 11 – here’s what this means for you now and in the future.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749671280/Blog/Hero%20Images/gitlab-gke-integration-cover.png","https://about.gitlab.com/blog/gcp-move-update","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"Update on our planned move from Azure to Google Cloud Platform\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"David Smith\"}],\n        \"datePublished\": \"2018-07-19\",\n      
}",{"title":1138,"description":1139,"authors":1144,"heroImage":1140,"date":1146,"body":1147,"category":683,"tags":1148},[1145],"David Smith","2018-07-19","\n\nNOTE to users in Crimea, Cuba, Iran, North Korea, Sudan, and Syria: GitLab.com may\nnot be accessible after the migration to Google. Google has informed us that\nthere are legal restrictions that are imposed for those countries. See this\n[U.S. Department of the Treasury link](http://www.treasury.gov/resource-center/sanctions/Programs/Pages/Programs.aspx)\nfor more details. At this time, we can only recommend that you download\nyour code or export relevant projects as a backup. See [this issue](https://gitlab.com/gitlab-com/migration/issues/649)\nfor more discussion.\n{: .alert .alert-warning}\n\nUpdate as of August 1: There will be a short maintenance window on Saturday, August 4 at 13:00 UTC. We will perform a test of approximately 1 hour.  This will help us verify some of our fixes to make sure the switchover goes as planned.\n{: .alert .alert-info}\n\nUpdate as of July 27: There will be a short maintenance window on Saturday, July 28 at 13:00 UTC. We will perform a short test of approximately 5 minutes.  This will help us verify some of our fixes to make sure our Chef runs work correctly with GitLab.com inaccessible.\n{: .alert .alert-info}\n\nUpdate as of July 24: Following our dry run of the migration on Saturday, July 21, we have rescheduled the migration with a new target date of Saturday, August 11. You can read through [our findings document](https://docs.google.com/document/d/1Y7Cv4BHmHw8djtDBex8opUGs8t0wWmgrueaCocKfYxs/edit?usp=sharing) for all the details.\n{: .alert .alert-info}\n\nImproving the performance and reliability of [GitLab.com](/pricing/)  has been a top priority for us. 
On this front we've made some incremental gains while we've been planning for a large change with the potential to net significant results: running GitLab as a [cloud native](/topics/cloud-native/) application on Kubernetes.\n\nThe next incremental step on our cloud native journey is a big one: migrating from Azure to Google Cloud Platform (GCP). While Azure has been a great provider for us, GCP has the best Kubernetes support and we believe will be the best provider for our long-term plans. In the short term, our users will see some immediate benefits once we cut over from Azure to GCP, including encrypted data at rest by default and faster caching due to GCP's tight integration with our existing CDN.\n\n## Upcoming maintenance windows for the GCP migration\n\nAs an update to [our earlier blog post on the migration](/blog/moving-to-gcp/), this is a short post to let our community know we are planning on performing the migration of GitLab.com the weekend of ~~July 28~~ August 11 (this has been rescheduled following our dry run on July 21). We have a maintenance window coming up that we would like to make sure everybody knows about.\n\n### What you need to know:\n\nDuring the maintenance windows, the following services will be unavailable:\n\n* SaaS website ([GitLab.com](https://gitlab.com/) will be offline, but [about.gitlab.com](https://about.gitlab.com/) and [docs.gitlab.com](https://docs.gitlab.com/) will still be available)\n* Git ssh\n* Git https\n* registry\n* CI/CD\n* Pages\n\n### Maintenance window - Dry run - Saturday, July 21 at 13:00 UTC\n\nAs a further update to our testing, we are planning to take a short maintenance window this weekend on Saturday, July 21 at 13:00 UTC to do final readiness checks.\nThis maintenance window should last one hour.\n\n2018-07-23 UPDATE: Here are the [findings from the maintenance window](https://docs.google.com/document/d/1Y7Cv4BHmHw8djtDBex8opUGs8t0wWmgrueaCocKfYxs/edit). 
We've decided to push our target date from July 28th to August 11th to comfortably address several issues. We will likely do a small maintenance window on Saturday, July 28th, and another full practice on Saturday, August 4th.\n\n### Maintenance window - Short test - Saturday, July 28 at 13:00 UTC\n\nWe will perform a short test of approximately 5 minutes.  This will help us verify some of our fixes to make sure our Chef runs work correctly with GitLab.com inaccessible.\n\n\n### Maintenance window - Dry run - Saturday, August 4 at 13:00 UTC\n\nWe will repeat the dry run exercise to verify our changes to the switchover plan.\n\n\n### Maintenance window - Actual switchover - Saturday, ~~July 28~~ August 11 at 10:00 UTC\n\nOn the day of the migration, we are planning to start at 10:00 UTC.  The time window for GitLab.com to be in maintenance is currently planned to be two hours.  Should any of these times change, we will post updates on the channels listed below. When this window is completed, GitLab.com will be running out of GCP.\n\n* [GitLab Status page](https://status.gitlab.com/)\n* [GitLab Status Twitter](https://twitter.com/gitlabstatus)\n\n### GitLab Pages and custom domains\n\nIf you have a custom domain on [GitLab Pages](https://docs.gitlab.com/ee/user/project/pages/):\n\n* We will have a proxy in place so you do not have to change your DNS immediately.\n* GitLab Pages will ultimately go to 35.185.44.232 after the ~~July 28~~ August 11 migration.\n* Do not change your DNS to this new address until we have successfully completed the migration.\n* We will post an update to our blog about when the cutoff will be for changing DNS from our Azure address to GCP for GitLab Pages.\n\nShould you need support during the migration, please reach out to [GitLab Support](https://about.gitlab.com/support/).\n\nWish us 
luck!\n",[1149,727,1150,9],"google","GKE",{"slug":1152,"featured":6,"template":688},"gcp-move-update","content:en-us:blog:gcp-move-update.yml","Gcp Move Update","en-us/blog/gcp-move-update.yml","en-us/blog/gcp-move-update",{"_path":1158,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":1159,"content":1164,"config":1170,"_id":1172,"_type":13,"title":1173,"_source":15,"_file":1174,"_stem":1175,"_extension":18},"/en-us/blog/getting-started-gitlab-ci-gcp",{"title":1160,"description":1161,"ogTitle":1160,"ogDescription":1161,"noIndex":6,"ogImage":1140,"ogUrl":1162,"ogSiteName":675,"ogType":676,"canonicalUrls":1162,"schema":1163},"Getting started with GitLab CI/CD and Google Cloud Platform","Discover how easy it is to set up CI/CD and Kubernetes deployment with our integration with Google Kubernetes Engine.","https://about.gitlab.com/blog/getting-started-gitlab-ci-gcp","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"Getting started with GitLab CI/CD and Google Cloud Platform\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"GitLab\"}],\n        \"datePublished\": \"2018-04-24\",\n      }",{"title":1160,"description":1161,"authors":1165,"heroImage":1140,"date":1167,"body":1168,"category":683,"tags":1169},[1166],"GitLab","2018-04-24","\n\nEarlier this month [we announced our new native integration with Google Kubernetes Engine (GKE)](/blog/gke-gitlab-integration/),\nallowing you to [set up CI/CD](/topics/ci-cd/) and Kubernetes deployment in just a few clicks. If you're new to\nGitLab CI on Google Cloud Platform (GCP), we've put together a quick [demo](#demo) and [instructions](#instructions) you can view below. 
For a more detailed walkthrough and the chance to ask questions, join us on April 26 for a [live demo](#join-google-and-gitlab-for-a-live-demo).\n\n## Demo\n\n\u003C!-- blank line -->\n\u003Cfigure class=\"video_container\">\n  \u003Ciframe src=\"https://www.youtube.com/embed/u3jFf3tTtMk\" frameborder=\"0\" allowfullscreen=\"true\"> \u003C/iframe>\n\u003C/figure>\n\u003C!-- blank line -->\n\n## Instructions\n\n### Add a Kubernetes Engine cluster\n\nHead on over to the CI/CD -> Kubernetes menu option in the GitLab UI. Here you can add your existing cluster to your project or create a brand new one.\n\n![Add your Kubernetes cluster](https://about.gitlab.com/images/blogimages/gitlab-ci-gcp/step1.png){: .shadow.center.medium}\n\nOnce connected, you can install applications like [Helm Tiller](https://helm.sh/), [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/), [Prometheus](https://docs.gitlab.com/ee/administration/monitoring/prometheus/), and [GitLab Runner](https://docs.gitlab.com/ee/ci/runners/) to your cluster with just one click.\n\n![Install applications](https://about.gitlab.com/images/blogimages/gitlab-ci-gcp/install-applications.png){: .shadow.center.medium}\n\n### Enable Auto DevOps\n\nWe've also worked with Google to integrate [GitLab Auto DevOps](https://docs.gitlab.com/ee/topics/autodevops/) with GKE. 
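\n\nAuto DevOps can also be switched on from a project's `.gitlab-ci.yml` rather than through the project settings UI; the sketch below is an assumption-laden example (the template name is the documented one, but whether your GitLab release supports template includes this way should be verified against the docs):\n\n```yaml\n# .gitlab-ci.yml - opt a single project into the Auto DevOps pipeline\ninclude:\n  - template: Auto-DevOps.gitlab-ci.yml\n```\n\nWith this in place, the project gets the same build, test, review, and deploy stages without any per-stage configuration.\n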
Using them together, you'll have a continuous deployment pipeline that automatically creates a [review app](https://docs.gitlab.com/ee/ci/review_apps/) for each merge request and once you merge, deploys the application into production on production-ready GKE.\n\nTo get started, go to CI/CD -> General pipeline settings, and select “Enable Auto DevOps.” For more information, read the [Auto DevOps docs](https://docs.gitlab.com/ee/topics/autodevops/).\n\n![Enable Auto DevOps](https://about.gitlab.com/images/blogimages/gitlab-ci-gcp/step2.png){: .shadow.center.medium}\n\nAuto DevOps takes the manual work out of CI/CD by automatically detecting what languages you’re using, and configuring a continuous integration and continuous deployment pipeline that results in your app running live on the Kubernetes Engine cluster.\n\n![Review pipeline](https://about.gitlab.com/images/blogimages/gitlab-ci-gcp/step3.png){: .shadow.center.medium}\n\nNow, whenever you create a merge request, we'll run a review pipeline to deploy a review app to your cluster where you can preview your changes. When you merge the code, GitLab will run a production pipeline to deploy your app to production, running on Kubernetes Engine!\n\n## Get $500 credit for your project\n\nEvery new Google Cloud Platform account receives $300 in credit [upon signup](https://console.cloud.google.com/freetrial?utm_campaign=2018_cpanel&utm_source=gitlab&utm_medium=referral). In partnership with Google, we're offering an additional $200 for both new and existing GCP accounts to get started with the GKE integration. Here's a link to [apply for your $200 credit](https://goo.gl/AaJzRW).\n\n## Join Google and GitLab for a live demo\n\nJoin Google’s [William Denniss](https://www.linkedin.com/in/williamdenniss/) and GitLab’s [William Chia](https://www.linkedin.com/in/williamchia/) for a walkthrough of the integration on April 26. 
You’ll learn how easy it is to set up a Kubernetes cluster, how to deploy your app using GitLab CI/CD, and how GKE enables you to deploy, update, and manage containerized applications at scale.\n\n[Register today](/webcast/scalable-app-deploy/)!\n",[1149,1150,9,923],{"slug":1171,"featured":6,"template":688},"getting-started-gitlab-ci-gcp","content:en-us:blog:getting-started-gitlab-ci-gcp.yml","Getting Started Gitlab Ci Gcp","en-us/blog/getting-started-gitlab-ci-gcp.yml","en-us/blog/getting-started-gitlab-ci-gcp",{"_path":1177,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":1178,"content":1184,"config":1191,"_id":1193,"_type":13,"title":1194,"_source":15,"_file":1195,"_stem":1196,"_extension":18},"/en-us/blog/gitlab-achieves-kcsp-status",{"title":1179,"description":1180,"ogTitle":1179,"ogDescription":1180,"noIndex":6,"ogImage":1181,"ogUrl":1182,"ogSiteName":675,"ogType":676,"canonicalUrls":1182,"schema":1183},"GitLab achieves CNCF Kubernetes certified provider status","GitLab is all-in on cloud native and now that we're CNCF Certified Service Providers we'll be able to help other companies do the same.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749681517/Blog/Hero%20Images/kubernetes-certified-service-provider-blog-cover.png","https://about.gitlab.com/blog/gitlab-achieves-kcsp-status","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"It's official: GitLab has achieved CNCF Kubernetes Certified Provider status\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Vick Kelkar\"}],\n        \"datePublished\": \"2020-08-24\",\n      }",{"title":1185,"description":1180,"authors":1186,"heroImage":1181,"date":1188,"body":1189,"category":1004,"tags":1190},"It's official: GitLab has achieved CNCF Kubernetes Certified Provider status",[1187],"Vick Kelkar","2020-08-24","\n\nGitLab is pleased to announce that we are now a Kubernetes Certified Service Provider (KCSP). 
KCSP is run by the Cloud Native Computing Foundation (CNCF) in collaboration with the Linux Foundation. The intention behind the KCSP program is to ensure that enterprises get the support they need to roll out applications to production Kubernetes environments. GitLab, through its KCSP status, wants to help organizations adopt a [cloud native](/topics/cloud-native/) approach for their business objectives.\n\n## Container and Kubernetes adoption\n\nA recent [CNCF report](https://www.cncf.io/wp-content/uploads/2020/03/CNCF_Survey_Report.pdf) shows that the use of containers in production has jumped from 23% in 2016 to 84% in 2019. According to another [CNCF survey](https://www.cncf.io/blog/2019-cncf-survey-results-are-here-deployments-are-growing-in-size-and-speed-as-cloud-native-adoption-becomes-mainstream/), cloud native technologies have become mainstream and many [CNCF projects](https://cncf.ci/) have adopted GitLab for their project needs. Kubernetes has emerged as the orchestrator of choice for organizations embarking on cloud native initiatives. Kubernetes helps organizations achieve container operational efficiencies and make developer interactions easier with strong API support. A recent [survey of IT professionals](https://blogs.vmware.com/cloudnative/2020/03/11/why-large-organizations-trust-kubernetes/) working at organizations with 1,000 or more employees found that over 50% are running Kubernetes in a production environment. This is creating demand for people who understand how to migrate, deploy, and run containerized applications in a cloud native manner.\n\n## Benefits of GitLab achieving KCSP\n\nAccording to a [451 Research report](https://clients.451research.com/reportaction/98250/Toc), even as the adoption of Kubernetes gains traction in the enterprise and [DevOps](/topics/devops/) personnel leverage Kubernetes to automate tasks, there is still a skills gap around container administration and orchestration. 
GitLab, as a KCSP, can provide consulting, training, support, workshops, and professional services to enterprises looking to embrace the Kubernetes cloud native approach. A [survey](https://www.cncf.io/blog/introducing-the-cncf-technology-radar/) conducted by [CNCF End User Community](https://www.cncf.io/people/end-user-community/) shows that enterprise customers were willing to try out GitLab in their production environments. GitLab offers advice to enterprise users who want to run their applications on a container scheduler like Kubernetes. As [the CNCF CTO pointed out](https://www.patreon.com/posts/open-source-is-28808432), GitLab has an open core business model and the roadmaps are public. This allows our customers and community to contribute features back into the GitLab project. GitLab can provide guidance on GitOps, DevOps and DevSecOps approaches to organizations adopting Kubernetes.  Achieving KCSP status allows us to offer trusted advice to our customers and to help enterprises adopt Kubernetes for production workloads.\n\n## What’s next\n\nGitLab, being an open-source minded company, is committed to the success of Kubernetes as an open-source technology. Kubernetes is seeing wide adoption in the industry for scaling and management of containerized workloads. GitLab can help deliver workloads securely onto a Kubernetes cluster. You can run GitLab on Kubernetes using our [helm charts](https://docs.gitlab.com/charts/) as well. Achieving the KCSP milestone shows GitLab’s commitment to grow and support the Kubernetes project and the CNCF community.  \n\nTo learn more about the KCSP program and CNCF program, visit their respective websites at [KCSP](https://www.cncf.io/certification/kcsp/) and [CNCF](https://www.cncf.io/). GitLab believes in a world where everyone can contribute. Open source organizations can learn more about [GitLab for Open Source](/solutions/open-source/). 
You can learn more about GitLab's Kubernetes partners [here](/resources/downloads/gitlab-partnership-roadmap.pdf).\n",[9,727],{"slug":1192,"featured":6,"template":688},"gitlab-achieves-kcsp-status","content:en-us:blog:gitlab-achieves-kcsp-status.yml","Gitlab Achieves Kcsp Status","en-us/blog/gitlab-achieves-kcsp-status.yml","en-us/blog/gitlab-achieves-kcsp-status",{"_path":1198,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":1199,"content":1205,"config":1210,"_id":1212,"_type":13,"title":1213,"_source":15,"_file":1214,"_stem":1215,"_extension":18},"/en-us/blog/gitlab-and-redhat-automation",{"title":1200,"description":1201,"ogTitle":1200,"ogDescription":1201,"noIndex":6,"ogImage":1202,"ogUrl":1203,"ogSiteName":675,"ogType":676,"canonicalUrls":1203,"schema":1204},"GitLab and Red Hat: Automation to enhance secure software development","How our closer relationship with Red Hat will boost deployment automation.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749666262/Blog/Hero%20Images/default-blog-image.png","https://about.gitlab.com/blog/gitlab-and-redhat-automation","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"GitLab and Red Hat: Automation to enhance secure software development\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Vick Kelkar\"}],\n        \"datePublished\": \"2020-04-29\",\n      }",{"title":1200,"description":1201,"authors":1206,"heroImage":1202,"date":1207,"body":1208,"category":1004,"tags":1209},[1187],"2020-04-29","\n\nWe're working towards a closer relationship with Red Hat and we're excited about the possibilities. We think developers can reduce time spent coding while still increasing productivity with technologies from GitLab and Red Hat. Here's what you need to know.\n\n### Why GitLab?\n\nGitLab enables both development and operations teams to apply [DevOps](/topics/devops/) practices using a single application. 
Using one tool for the entire application’s lifecycle, i.e. right from development and deployment to operations, allows the organization to achieve operational efficiency and reduce deployment cycle times.\n\nGitLab not only provides source code management ([SCM](/solutions/source-code-management/)) but it also offers CI/CD to make streamlined deployments to a container platform like Red Hat OpenShift while maintaining visibility into the deployment pipelines. Furthermore, with [AutoDevOps](https://docs.gitlab.com/ee/topics/autodevops/), the GitLab application also addresses the organization’s security requirements through scanning and dependency mapping for the developed application. The ability to check the license of software being used, before deploying it in a production environment, helps organizations reduce their [compliance risks](/solutions/compliance/).\n\n### Why GitLab with Red Hat?\n\nRed Hat has a number of technologies in its portfolio. At the core is Red Hat Enterprise Linux ([RHEL](https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux)), an enterprise-grade Linux operating system (OS) platform used by many Fortune 500 companies that can be deployed across the hybrid cloud, from bare-metal and virtual servers to private and public cloud environments. RHEL makes it easier for the operations team to manage the upgrades, security patches and life cycles of servers being used to run applications like GitLab. Red Hat also provides the industry’s most comprehensive enterprise Kubernetes platform in Red Hat OpenShift. OpenShift is uniquely positioned to run a containerized application on a public or private cloud.\n\nGitLab can accelerate software development and deployment of applications while RHEL can act as the more secure, fully managed OS that can scale with the application. 
The inclusion of new DevOps tools in Red Hat’s hybrid cloud technologies like [service mesh](https://www.openshift.com/blog/red-hat-openshift-service-mesh-is-now-available-what-you-should-know) empowers developers to iterate faster on a foundation of trusted enterprise Linux.\n\nThe GitLab solution, which includes a [CI/CD workflow](/topics/ci-cd/), an Auto DevOps workflow, a container registry, and Kubernetes integration, can be deployed on RHEL using the [install](/install/) instructions, and you can find out more about the GitLab SaaS pricing model [here](/pricing/#gitlab-com). You can read our sales [FAQ](/sales/#faq) or contact our [sales team](/sales/) if you have questions about the offering.\n\nGitLab can be deployed on RHEL-based machines to provide organizations with DevOps infrastructure and collaboration tools. Our collaboration with Red Hat doesn't stop at RHEL as a supported platform for the GitLab server: Red Hat OpenShift can also be a target for our CI/CD and Auto DevOps workflows. Application container images can be pushed to our registry and used to deploy applications into Red Hat OpenShift.\n\n### What’s Next?\n\nAs GitLab and Red Hat increase their collaboration, we plan to announce the availability of GitLab Runner Operator for OpenShift in the near future. At GitLab, we have an [engineering epic](https://gitlab.com/groups/gitlab-org/-/epics/2068) underway to develop first-class support for OpenShift.\n\nWith the upcoming product integrations with Red Hat, GitLab is striving to increase collaboration in the organization, increase developer velocity, and reduce friction between teams, regardless of whether applications are deployed to VMs or containers. 
The overarching goal is to help organizations improve their [DevSecOps](/solutions/security-compliance/) posture while significantly reducing security and compliance risks.\n\n### Resources\n\n- [GitOps: The Future of Infrastructure Automation - A panel discussion with Weaveworks, HashiCorp, Red Hat, and GitLab](https://about.gitlab.com/why/gitops-infrastructure-automation/)\n- [RHEL 8 Install documentation](https://about.gitlab.com/install/#centos-8)\n- [RHEL 7 Install documentation](https://about.gitlab.com/install/#centos-7)\n- [GitLab on Microsoft Azure](https://docs.gitlab.com/ee/install/azure/)\n- [Try OpenShift](https://www.openshift.com/try)\n",[901,108,727,685,9],{"slug":1211,"featured":6,"template":688},"gitlab-and-redhat-automation","content:en-us:blog:gitlab-and-redhat-automation.yml","Gitlab And Redhat Automation","en-us/blog/gitlab-and-redhat-automation.yml","en-us/blog/gitlab-and-redhat-automation",{"_path":1217,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":1218,"content":1224,"config":1229,"_id":1231,"_type":13,"title":1232,"_source":15,"_file":1233,"_stem":1234,"_extension":18},"/en-us/blog/gitlab-and-workloads-on-ibm-z-and-red-hat-openshift",{"title":1219,"description":1220,"ogTitle":1219,"ogDescription":1220,"noIndex":6,"ogImage":1221,"ogUrl":1222,"ogSiteName":675,"ogType":676,"canonicalUrls":1222,"schema":1223},"GitLab enhances DevOps journey on Linux on IBM Z and Red Hat OpenShift","GitLab integrates with IBM Linux on Z and RedHat OpenShift to help app developers deploy to more resilient systems.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749681581/Blog/Hero%20Images/gitlab-linux-ibm-z-redhat-openshift.jpg","https://about.gitlab.com/blog/gitlab-and-workloads-on-ibm-z-and-red-hat-openshift","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"GitLab enhances DevOps journey on Linux on IBM Z and Red Hat OpenShift\",\n        \"author\": 
[{\"@type\":\"Person\",\"name\":\"Vick Kelkar\"}],\n        \"datePublished\": \"2020-09-17\",\n      }",{"title":1219,"description":1220,"authors":1225,"heroImage":1221,"date":1226,"body":1227,"category":1004,"tags":1228},[1187],"2020-09-17","\n\nSeptember 2020 marks 20 years of IBM Linux on Z. If you are using DevOps practices to develop your application on IBM Z, this article is for you. You will learn how you can leverage GitLab integrations on these resilient systems to enhance your DevOps journey.\n\n## GitLab's journey on Linux on IBM Z and Red Hat OpenShift\n\nRegardless of whether you are using IBM Z or Red Hat OpenShift, revenue-generating applications must be up and available. For example, if a banking application or Point of Sale (POS) application is down for even just five minutes, the company runs the risk of lost revenue during application downtime. This is where high availability (HA) of container platforms like Red Hat OpenShift or hardware stacks like Linux on IBM Z shine. HA strategies such as horizontal, vertical, consensus, or distributed architectures used by these systems are outside the scope of this article.\n\nSo, how would developers develop and deploy the revenue-generating application to the resilient systems mentioned above? How can developers deploy, patch, upgrade, and scale applications in these systems using techniques such as [canary deployments](https://docs.gitlab.com/ee/user/project/canary_deployments.html)? Developers can use GitLab and the [GitLab Runner](https://docs.gitlab.com/runner/) open-source project to run GitLab CI/CD cloud-native pipelines on these resilient systems in the following ways:\n\n* GitLab can be implemented on Linux on Z using logical partitions (LPAR) and virtualization hosts Z/VM. You can learn more about running GitLab on IBM Z using the whitepaper published by the joint GitLab and IBM teams back in 2017. 
Request a copy of the whitepaper by reaching out to Suchitra Joshi at IBM (suchi@ibm.com).\n\n* GitLab, with its [13.2 release](/releases/2020/07/22/gitlab-13-2-released/), announced [GitLab Runner support for Linux on IBM Z](/releases/2020/07/22/gitlab-13-2-released/#gitlab-runner-support-for-linux-on-ibm-z). The GitLab 13.2 release supports the execution of runners on Linux on Z and has a Docker image of the runner for the platform. Developers can leverage the full GitLab CI stack through the use of SSH executors on mainframes and can take advantage of public [GitLab CI/CD examples](https://docs.gitlab.com/ee/ci/examples/).\n\n* The GitLab and Red Hat teams worked together to develop the GitLab Runner Operator for Red Hat OpenShift. You can find the GitLab Runner Operator in the OpenShift embedded OperatorHub and the [Red Hat container image catalog](https://catalog.redhat.com/software/containers/gitlab/gitlab-operator/5ea09928ecb5246c0903b9d5).\n\n## DevOps, cloud native, and containers\n\nCloud computing is becoming more mainstream with [enterprise](/enterprise/) IT because it offers composability, speed, and elasticity to organizations on a global scale. Cloud computing is also ideal for big transformation projects that are trying to modernize infrastructure and software development processes. Along with cloud computing, enterprises are exploring hybrid cloud and [cloud native](/topics/cloud-native/) approaches for developing and deploying their mission-critical workloads. When it comes to cloud-native approaches, [DevOps](/topics/devops/) plays a crucial role as more and more organizations are adopting modern software development methodologies to develop and scale their workloads.\n\nIt's not a hard requirement, but cloud native approaches are usually coupled with containers, which are becoming the basic unit of deployment. 
Containers allow application developers to package and scale applications using a container orchestrator like [Kubernetes](/solutions/kubernetes/).\n\n## What is GitLab?\n\nGitLab is an open source [DevOps platform](/solutions/devops-platform/) delivered as a single application. The open source project has more than 3,000 contributors and a growing [community](/community/). GitLab fundamentally accelerates the software development lifecycle while addressing important enterprise concerns such as security and compliance. GitLab helps organizations with collaboration, version control, continuous integration (CI), continuous delivery (CD) and [DevSecOps](/solutions/security-compliance/) workflows. GitLab can integrate with existing tools using custom webhooks as well. Read up on GitLab [features](/pricing/feature-comparison/) to learn how to improve developer productivity.\n\n## Looking forward\n\nGitLab aims to help developers deploy their mission-critical applications to the resilient systems of their choice. As the joint teams increase their collaboration, we plan to announce the availability of GitLab on OpenShift in the future. 
You can follow the progress in the [engineering epic](https://gitlab.com/gitlab-org/gl-openshift).\n\n## Resources\n\n* [GitLab achieves CNCF KCSP status](/blog/gitlab-achieves-kcsp-status/)\n* [GitLab Runner the OpenShift Way](https://www.openshift.com/blog/installing-the-gitlab-runner-the-openshift-way)\n* [Why Linux on Z mainframe?](https://www.ibm.com/it-infrastructure/z/os/linux)\n* [Integrating IBM z/OS platform in CI pipelines with Gitlab](http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102827)\n* [GitLab on Red Hat](/partners/technology-partners/redhat/)\n* [Try OpenShift](https://www.openshift.com/try)\n\nCover image by [Matt Howard](https://unsplash.com/@thematthoward) on [Unsplash](https://unsplash.com)\n{: .note}\n",[9,727],{"slug":1230,"featured":6,"template":688},"gitlab-and-workloads-on-ibm-z-and-red-hat-openshift","content:en-us:blog:gitlab-and-workloads-on-ibm-z-and-red-hat-openshift.yml","Gitlab And Workloads On Ibm Z And Red Hat Openshift","en-us/blog/gitlab-and-workloads-on-ibm-z-and-red-hat-openshift.yml","en-us/blog/gitlab-and-workloads-on-ibm-z-and-red-hat-openshift",{"_path":1236,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":1237,"content":1243,"config":1249,"_id":1251,"_type":13,"title":1252,"_source":15,"_file":1253,"_stem":1254,"_extension":18},"/en-us/blog/gitlab-chart-works-towards-kubernetes-1-22",{"title":1238,"description":1239,"ogTitle":1238,"ogDescription":1239,"noIndex":6,"ogImage":1240,"ogUrl":1241,"ogSiteName":675,"ogType":676,"canonicalUrls":1241,"schema":1242},"GitLab Chart works towards Kubernetes 1.22","New minimum version is 1.19 for in-chart NGINX Ingress Controller.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749670178/Blog/Hero%20Images/GitLab-Ops.png","https://about.gitlab.com/blog/gitlab-chart-works-towards-kubernetes-1-22","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"GitLab Chart works 
towards Kubernetes 1.22\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"GitLab\"}],\n        \"datePublished\": \"2021-12-17\",\n      }",{"title":1238,"description":1239,"authors":1244,"heroImage":1240,"date":1245,"body":1246,"category":683,"tags":1247},[1166],"2021-12-17","\n\nWe are working to make the GitLab Chart and the GitLab Operator support Kubernetes 1.22, which requires updating the NGINX Ingress Controller used within the Chart and Operator.\n\nThis update requires that we drop support for versions of Kubernetes prior to 1.19 if using the in-chart NGINX Ingress Controller. Users that still require support for Kubernetes 1.18 and prior releases will only be able to deploy up to Chart version 5.5.x.\n\n## More details on the changes\n\nGitLab uses a [forked version](https://docs.gitlab.com/charts/charts/nginx/fork.html) of the community-supported ingress-nginx Chart to expose the GitLab components via Ingresses. \n\nSupporting Kubernetes 1.22 requires updating the included NGINX Ingress Controller to [version 1.0.4](https://github.com/kubernetes/ingress-nginx/releases/tag/controller-v1.0.4) in order to support the networking.k8s.io/v1 API in Kubernetes 1.22. The previous networking API (networking.k8s.io/v1beta1) has been deprecated since Kubernetes 1.19 and removed in Kubernetes 1.22.\n\nAs a result of the upgrade, we inherit the NGINX Ingress Controller's breaking change, which drops support for Kubernetes versions prior to 1.19. 
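\n\nTo make the API change concrete, here is a minimal Ingress manifest in the new networking.k8s.io/v1 form; the object, host, and service names are hypothetical:\n\n```yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: gitlab-webservice        # hypothetical name\nspec:\n  rules:\n    - host: gitlab.example.com\n      http:\n        paths:\n          - path: /\n            pathType: Prefix     # required in the v1 API\n            backend:\n              service:           # v1 nests the backend under "service"\n                name: gitlab-webservice\n                port:\n                  number: 8181\n```\n\nThe older v1beta1 form expressed the backend as flat `serviceName`/`servicePort` fields; manifests in that form are rejected once the cluster is on Kubernetes 1.22.\n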
They provide more clarification in [their FAQ](https://kubernetes.github.io/ingress-nginx/#faq-migration-to-apiversion-networkingk8siov1).\n\nThe forked ingress-nginx Chart is based on [version 4.0.6](https://artifacthub.io/packages/helm/ingress-nginx/ingress-nginx/4.0.6) of ingress-nginx/ingress-nginx, which uses [version 1.0.4](https://github.com/kubernetes/ingress-nginx/releases/tag/controller-v1.0.4) of the NGINX Ingress Controller.\n\n## Who is impacted\n\nAny deployment that uses the NGINX Ingress Controller provided by the GitLab Chart. This covers most, but not all, users of our Helm Chart and Operator. If you are using an alternate Ingress provider (such as AWS ALB, Azure Application Gateway, or Google GCE Ingress), you will not be affected.\n\n## What to expect\n\nWe recognize that this change may have unintended effects, but most GitLab instances will seamlessly transition to the new NGINX Ingress Controller without incident. As always, we recommend creating a backup prior to upgrading the GitLab Chart or GitLab Operator so that your data can be recovered should the upgrade run into complications.\n\nDepending on the environment and cloud provider, the IP addresses associated with the Ingresses may change when the NGINX Ingress Controller is replaced during the upgrade. This may require that the DNS records for the GitLab instance be updated if a controller such as external-dns is not managing the DNS records. The DNS records related to the following Ingress objects may be affected:\n\n* gitlab.\n* registry.\n* minio. (if used)\n* kas. (if used)\n\nIf the GitLab Pages component is enabled, there may be other DNS records that will need to be updated to connect to the proper Ingress.\n\n## What if there is a problem with the upgrade?\n\nWhile it is not expected that an upgrade will cause a problem, not all environments or configurations can be anticipated. 
In the event that there is an upgrade problem, please contact GitLab Support if you are a licensed customer. If you are running the Community Edition of GitLab, please open an issue in the [GitLab Chart](https://gitlab.com/gitlab-org/charts/gitlab/-/issues/new?issue%5Bmilestone_id%5D=) or [GitLab Operator](https://gitlab.com/gitlab-org/cloud-native/gitlab-operator/-/issues/new?issue%5Bmilestone_id%5D=) projects.\n",[685,1248,9],"workflow",{"slug":1250,"featured":6,"template":688},"gitlab-chart-works-towards-kubernetes-1-22","content:en-us:blog:gitlab-chart-works-towards-kubernetes-1-22.yml","Gitlab Chart Works Towards Kubernetes 1 22","en-us/blog/gitlab-chart-works-towards-kubernetes-1-22.yml","en-us/blog/gitlab-chart-works-towards-kubernetes-1-22",{"_path":1256,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":1257,"content":1263,"config":1269,"_id":1271,"_type":13,"title":1272,"_source":15,"_file":1273,"_stem":1274,"_extension":18},"/en-us/blog/gitlab-ci-on-google-kubernetes-engine",{"title":1258,"description":1259,"ogTitle":1258,"ogDescription":1259,"noIndex":6,"ogImage":1260,"ogUrl":1261,"ogSiteName":675,"ogType":676,"canonicalUrls":1261,"schema":1262},"GitLab CI/CD on Google Kubernetes Engine in 15 minutes or less","Install GitLab's Runner on GKE in a few simple steps and get started with GitLab CI/CD pipelines.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749667003/Blog/Hero%20Images/gke_in_15_cover_2.jpg","https://about.gitlab.com/blog/gitlab-ci-on-google-kubernetes-engine","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"GitLab CI/CD on Google Kubernetes Engine in 15 minutes or less\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Elliot Rushton\"}],\n        \"datePublished\": \"2020-03-27\",\n      }",{"title":1258,"description":1259,"authors":1264,"heroImage":1260,"date":1266,"body":1267,"category":683,"tags":1268},[1265],"Elliot 
Rushton","2020-03-27","If you use [GitLab Self-Managed](/pricing/#self-managed), then getting started with GitLab CI using [GitLab's integration with Google Kubernetes Engine (GKE)](/partners/technology-partners/google-cloud-platform/) can be accomplished in a few simple steps. We have several blog posts and documentation that provide detailed [setup instructions for working with Kubernetes clusters](#other-resources). In this post, we highlight the essential steps so that you can get going with GitLab CI/CD in less than 15 minutes.\n\nBy using the GitLab and GKE integration, you can install GitLab Runners on GKE with one click and immediately start running your CI pipelines. Runners are the lightweight agents that execute the CI jobs in your [GitLab CI/CD](/topics/ci-cd/) pipeline.\n\n## Prerequisites\n\nThe following prerequisites need to be configured before you can use the built-in GitLab GKE integration:\n- GitLab instance installed and configured with user credentials\n- [Google OAuth2 OmniAuth Provider](https://docs.gitlab.com/ee/integration/google.html) installed and configured on your GitLab instance\n- A Google Cloud project with the following [APIs enabled](https://docs.gitlab.com/ee/integration/google.html#enabling-google-oauth):\n  - Google Kubernetes Engine API\n  - Cloud Resource Manager API\n  - Cloud Billing API\n\n## Get started\n\n![Setup pipeline](https://about.gitlab.com/images/blogimages/ci-gke-in-15/gke_in_15_pipeline.png){: .shadow.medium.center}\n\n### Step 1\n\nWe’re going to add a shared runner at the instance level. 
First, as an administrator, click the “Admin Area” icon\n\n![Runner setup step 1](https://about.gitlab.com/images/blogimages/ci-gke-in-15/ci_gke_in_15_001.png){: .shadow.medium.center}\n\nThen on the left menu, select “Kubernetes”\n\n![Runner setup step 2](https://about.gitlab.com/images/blogimages/ci-gke-in-15/ci_gke_in_15_002.png){: .shadow.medium.center}\n\n### Step 2\n\nClick the green “Add Kubernetes cluster” button.\n\n![Runner setup step 3](https://about.gitlab.com/images/blogimages/ci-gke-in-15/ci_gke_in_15_003.png){: .shadow.medium.center}\n\n### Step 3\n\nThe screen to “Add a Kubernetes cluster integration” should come up. Click on the “Google GKE” icon on the right.\n\n![Runner setup step 4](https://about.gitlab.com/images/blogimages/ci-gke-in-15/ci_gke_in_15_004.png){: .shadow.medium.center}\n\n### Step 4\n\nGive your cluster a name, and select a “Google Cloud Platform project” from your linked GCP account. If no projects are populated in the menu, then either your Google OAuth2 integration isn’t configured correctly or your project is missing the needed permissions. Check that these are set up and that the [APIs mentioned in the prerequisites above](#prerequisites) are enabled.\n\nChoose a zone in which to run your cluster. For the purposes of running CI, the number of nodes in your cluster determines how many simultaneous jobs you can run at a given time. As we are using the built-in GitLab Google Kubernetes integration, you can set a maximum of four nodes. Here we set that to three.\n\nClick “Create Kubernetes Cluster”.\n\n![Runner setup step 5](https://about.gitlab.com/images/blogimages/ci-gke-in-15/ci_gke_in_15_005.png){: .shadow.medium.center}\n\nIt takes a few minutes for the cluster to be created. While it’s happening, you should see a screen like this. 
You can leave this screen and come back (by going to “Admin Area> Kubernetes > [your cluster name]”)\n\n![Runner setup step 6](https://about.gitlab.com/images/blogimages/ci-gke-in-15/ci_gke_in_15_006.png){: .shadow.medium.center}\n\n### Step 5\n\nOnce the cluster has been created, we need to install two applications. First, install “Helm Tiller” by clicking on the “Install” button next to it.\n\n![Runner setup step 7](https://about.gitlab.com/images/blogimages/ci-gke-in-15/ci_gke_in_15_007.png){: .shadow.medium.center}\n\nThis takes a moment, but should be much quicker than creating the cluster initially was.\n\n![Runner setup step 8](https://about.gitlab.com/images/blogimages/ci-gke-in-15/ci_gke_in_15_008.png){: .shadow.medium.center}\n\n### Step 6\n\nNow that Helm Tiller is installed, more applications can be installed. For this tutorial we only need to install the “GitLab Runner” application. Click the install button next to GitLab Runner.\n\n![Runner setup step 9](https://about.gitlab.com/images/blogimages/ci-gke-in-15/ci_gke_in_15_009.png){: .shadow.medium.center}\n\nAgain, this should go pretty quickly.\n\n![Runner setup step 10](https://about.gitlab.com/images/blogimages/ci-gke-in-15/ci_gke_in_15_010.png){: .shadow.medium.center}\n\nOnce done, the button will change to an “Uninstall” button. You’re now set up with shared runners on your GitLab instance and can run your first CI pipeline!\n\n![Runner setup step 11](https://about.gitlab.com/images/blogimages/ci-gke-in-15/ci_gke_in_15_011.png){: .shadow.medium.center}\n\n### Next steps\n\nNow that you are up and running with GitLab CI/CD on GKE, you can build and run your first GitLab CI/CD pipeline. 
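To confirm that the new shared runners pick up work, a minimal `.gitlab-ci.yml` such as the following (an illustrative sketch; the job name, image, and script are arbitrary) is enough:

```yaml
# Smoke-test pipeline: any shared runner, including the GKE-hosted ones, can run it.
smoke-test:
  image: alpine:latest
  script:
    - echo "Hello from a GitLab Runner on GKE"
```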
Here are links to a few resources to get you started.\n\n- [Getting Started with GitLab CI/CD](https://docs.gitlab.com/ee/ci/quick_start/)\n- [How to build a CI/CD pipeline in 20 minutes or less](/blog/building-a-cicd-pipeline-in-20-mins/)\n- [Getting started with Auto DevOps](https://docs.gitlab.com/ee/topics/autodevops/cloud_deployments/auto_devops_with_gke.html)\n\nIf you are planning to manage your own fleet of GitLab Runners, then you may also be thinking about how best to set up autoscaling of GitLab Runners. Since we have just set up your first Runner on GKE, you can review the [GitLab Runner Kubernetes Executor docs](https://docs.gitlab.com/runner/executors/kubernetes.html) for additional details on how the GitLab Runner uses Kubernetes to run builds on a Kubernetes cluster.\n\n### Other resources\n\n- [Scalable app deployment webcast](https://about.gitlab.com/webcast/scalable-app-deploy/)\n- [Install GitLab on a cloud native environment](https://docs.gitlab.com/charts/)\n- [Adding and removing Kubernetes clusters](https://docs.gitlab.com/ee/user/project/clusters/add_remove_clusters.html)\n- [Deploy production-ready GitLab on Google Kubernetes Engine](https://cloud.google.com/solutions/deploying-production-ready-gitlab-on-gke)\n\nCover image by [Agê Barros](https://unsplash.com/photos/rBPOfVqROzY) on [Unsplash](https://www.unsplash.com)\n{: .note}\n",[232,9,727,108,1150,1149],{"slug":1270,"featured":6,"template":688},"gitlab-ci-on-google-kubernetes-engine","content:en-us:blog:gitlab-ci-on-google-kubernetes-engine.yml","Gitlab Ci On Google Kubernetes 
Engine","en-us/blog/gitlab-ci-on-google-kubernetes-engine.yml","en-us/blog/gitlab-ci-on-google-kubernetes-engine",{"_path":1276,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":1277,"content":1282,"config":1290,"_id":1292,"_type":13,"title":1293,"_source":15,"_file":1294,"_stem":1295,"_extension":18},"/en-us/blog/gitlab-com-stability-post-gcp-migration",{"title":1278,"description":1279,"ogTitle":1278,"ogDescription":1279,"noIndex":6,"ogImage":1140,"ogUrl":1280,"ogSiteName":675,"ogType":676,"canonicalUrls":1280,"schema":1281},"What's up with GitLab.com? Check out the latest data on its stability","Let's take a look at the data on the stability of GitLab.com from before and after our recent migration from Azure to GCP, and dive into why things are looking up.","https://about.gitlab.com/blog/gitlab-com-stability-post-gcp-migration","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"What's up with GitLab.com? Check out the latest data on its stability\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Andrew Newdigate\"}],\n        \"datePublished\": \"2018-10-11\",\n      }",{"title":1278,"description":1279,"authors":1283,"heroImage":1140,"date":1285,"body":1286,"category":683,"tags":1287},[1284],"Andrew Newdigate","2018-10-11","\nThis post is inspired by [this comment on Reddit](https://www.reddit.com/r/gitlab/comments/9f71nq/thanks_gitlab_team_for_improving_the_stability_of/),\nthanking us for improving the stability of GitLab.com. Thanks, hardwaresofton! Making GitLab.com\nready for your mission-critical workloads has been top of mind for us for some time, and it's\ngreat to hear that users are noticing a difference.\n\n_Please note that the numbers in this post differ slightly from the Reddit post as the data has changed since that post._\n\nWe will continue to work hard on improving the availability and stability of the platform. 
Our\ncurrent goal is to achieve 99.95 percent availability on GitLab.com – look out for an upcoming\npost about how we're planning to get there.\n\n## GitLab.com stability before and after the migration\n\nAccording to [Pingdom](http://stats.pingdom.com/81vpf8jyr1h9), GitLab.com's availability for the year to date, up until the migration was **[99.68 percent](https://docs.google.com/spreadsheets/d/1uJ_zacNvJTsvJUfNpi1D_aPBg-vNJC1xJzsSwGKKt8g/edit#gid=527563485&range=F2)**, which equates to about 32 minutes of downtime per week on average.\n\nSince the migration, our availability has improved greatly, although we have much less data to compare with than in Azure.\n\n![Availability Chart](https://docs.google.com/spreadsheets/d/e/2PACX-1vQg_tdtdZYoC870W3u2R2icSK0Rd9qoOtDJqYHALaQlzhxXOmfY63X1NMMyFVEypQs7NngR4UUIZx5R/pubchart?oid=458170195&format=image)\n\nUsing data publicly available from Pingdom, here are some stats about our availability for the year to date:\n\n| Period                                 | Mean-time between outage events |\n| -------------------------------------- | ------------------------------- |\n| Pre-migration (Azure)                  | **1.3 days**                    |\n| Post-migration (GCP)                   | **7.3 days**                    |\n| Post-migration (GCP) excluding 1st day | **12 days**                     |\n\nThis is great news: we're experiencing outages less frequently. 
What does this mean for our availability, and are we on track to achieve our goal of 99.95 percent?\n\n| Period                    | Availability                                                                                                                   | Downtime per week |\n| ------------------------- | ------------------------------------------------------------------------------------------------------------------------------ | ----------------- |\n| Pre-migration (Azure)     | **[99.68%](https://docs.google.com/spreadsheets/d/1uJ_zacNvJTsvJUfNpi1D_aPBg-vNJC1xJzsSwGKKt8g/edit#gid=527563485&range=F2)**  | **32 minutes**    |\n| Post-migration (GCP)      | **[99.88 %](https://docs.google.com/spreadsheets/d/1uJ_zacNvJTsvJUfNpi1D_aPBg-vNJC1xJzsSwGKKt8g/edit#gid=527563485&range=B3)** | **13 minutes**    |\n| Target – not yet achieved | **99.95%**                                                                                                                     | **5 minutes**     |\n\nDropping from 32 minutes per week average downtime to 13 minutes per week means we've experienced a **61 percent improvement** in our availability following our migration to Google Cloud Platform.\n\n## Performance\n\nWhat about the performance of GitLab.com since the migration?\n\nPerformance can be tricky to measure. In particular, averages are a terrible way of measuring performance, since they neglect outlying values. One of the better ways to measure performance is with a latency histogram chart. To do this, we imported the GitLab.com access logs for July (for Azure) and September (for Google Cloud Platform) into [Google BigQuery](https://cloud.google.com/bigquery/), then selected the 100 most popular endpoints for each month and categorised these as either API, web, git, long-polling, or static endpoints. 
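In spirit, the categorisation and bucketing step looks like the following sketch (the rows are synthetic and the power-of-two buckets are illustrative; the real analysis ran as a BigQuery job over the access logs):

```python
import math
from collections import defaultdict

# Synthetic stand-ins for parsed access-log rows: (endpoint category, latency in seconds).
ROWS = [("api", 0.05), ("api", 0.07), ("web", 0.2), ("web", 3.1), ("git", 1.3)]

def bucket(latency_s):
    """Round a latency up to the nearest power-of-two bucket edge (in seconds)."""
    return 2 ** math.ceil(math.log2(latency_s))

def latency_histogram(rows):
    """Count requests per (category, bucket edge); the tail buckets reveal slow requests."""
    hist = defaultdict(lambda: defaultdict(int))
    for category, latency_s in rows:
        hist[category][bucket(latency_s)] += 1
    return hist
```

Plotting each category's counts against its bucket edges yields a latency histogram per endpoint class.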
Comparing these histograms side-by-side allows us to study how the performance of GitLab.com has changed since the migration.\n\n![GitLab.com Latency Histogram](https://about.gitlab.com/images/blogimages/whats-up-with-gitlab-com/azure_v_gcp_latencies.gif)\n\nIn this histogram, higher values on the left indicate better performance. The right of the graph is the \"_tail_\", and the \"_fatter the tail_\", the worse the user experience.\n\nThis graph shows us that with the move to GCP, more requests are completing within a satisfactory amount of time.\n\nHere are two more graphs showing the difference for API and Git requests respectively.\n\n![API Latency Histogram](https://about.gitlab.com/images/blogimages/whats-up-with-gitlab-com/api-performance-histogram.png)\n\n![Git Latency Histogram](https://about.gitlab.com/images/blogimages/whats-up-with-gitlab-com/git-performance-histogram.png)\n\n## Why these improvements?\n\nWe chose Google Cloud Platform because we believe that Google offers the most reliable cloud platform for our workload, particularly as we move towards running GitLab.com in [Kubernetes](/solutions/kubernetes/).\n\nHowever, there are many other reasons unrelated to our change in cloud provider for these improvements to stability and performance.\n\n> #### _“We chose Google Cloud Platform because we believe that Google offers the most reliable cloud platform for our workload”_\n\nLike any large SaaS site, GitLab.com is a complicated system, and attributing availability changes to individual causes is extremely difficult, but here are a few factors which may be affecting our availability and performance:\n\n### Reason #1: Our Gitaly Fleet on GCP is much more powerful than before\n\nGitaly is responsible for all Git access in the GitLab application. Before Gitaly, Git access occurred directly from within Rails workers. 
Because of the scale we run at, we require many servers serving the web application, and therefore, in order to share git data between all workers, we relied on NFS volumes. Unfortunately this approach doesn't scale well, which led to us building Gitaly, a dedicated Git service.\n\n> #### _“We've opted to give our fleet of 24 Gitaly servers a serious upgrade”_\n\n#### Our upgraded Gitaly fleet\n\nAs part of the migration, we've opted to give our fleet of 24 [Gitaly](/blog/the-road-to-gitaly-1-0/) servers a serious upgrade. If the old fleet was the equivalent of a nice family sedan, the new fleet are like a pack of snarling musclecars, ready to serve your Git objects.\n\n| Environment | Processor                       | Number of cores per instance | RAM per instance |\n| ----------- | ------------------------------- | ---------------------------- | ---------------- |\n| Azure       | Intel Xeon Ivy Bridge @ 2.40GHz | 8                            | 55GB             |\n| GCP         | Intel Xeon Haswell @ 2.30GHz    | **32**                       | **118GB**        |\n\nOur new Gitaly fleet is much more powerful. This means that Gitaly can respond to requests more quickly, and deal better with unexpected traffic surges.\n\n#### IO performance\n\nAs you can probably imagine, serving [225TB of Git data](https://dashboards.gitlab.com/d/ZwfWfY2iz/vanity-metrics-dashboard?orgId=1) to roughly half-a-million active users a week is a fairly IO-heavy operation. 
Any performance improvements we can make to this will have a big impact on the overall performance of GitLab.com.\n\nFor this reason, we've focused on improving performance here too.\n\n| Environment | RAID         | Volumes | Media    | filesystem | Performance                                                            |\n| ----------- | ------------ | ------- | -------- | ---------- | ---------------------------------------------------------------------- |\n| Azure       | RAID 5 (lvm) | 16      | magnetic | xfs        | 5k IOPS, 200MB/s (_per disk_) / 32k IOPS **1280MB/s** (_volume group_) |\n| GCP         | No raid      | 1       | **SSD**  | ext4       | **60k read IOPs**, 30k write IOPs, 800MB/s read 200MB/s write          |\n\nHow does this translate into real-world performance? Here are average read and write times across our Gitaly fleet:\n\n##### IO performance is much higher\n\nHere are some comparative figures for our Gitaly fleet from Azure and GCP. In each case, the performance in GCP is much better than in Azure, although this is what we would expect given the more powerful fleet.\n\n[![Disk read time graph](https://docs.google.com/spreadsheets/d/e/2PACX-1vQg_tdtdZYoC870W3u2R2icSK0Rd9qoOtDJqYHALaQlzhxXOmfY63X1NMMyFVEypQs7NngR4UUIZx5R/pubchart?oid=458168633&format=image)](https://docs.google.com/spreadsheets/d/1uJ_zacNvJTsvJUfNpi1D_aPBg-vNJC1xJzsSwGKKt8g/edit#gid=1002437172) [![Disk write time graph](https://docs.google.com/spreadsheets/d/e/2PACX-1vQg_tdtdZYoC870W3u2R2icSK0Rd9qoOtDJqYHALaQlzhxXOmfY63X1NMMyFVEypQs7NngR4UUIZx5R/pubchart?oid=884528549&format=image)](https://docs.google.com/spreadsheets/d/1uJ_zacNvJTsvJUfNpi1D_aPBg-vNJC1xJzsSwGKKt8g/edit#gid=1002437172) [![Disk Queue length 
graph](https://docs.google.com/spreadsheets/d/e/2PACX-1vQg_tdtdZYoC870W3u2R2icSK0Rd9qoOtDJqYHALaQlzhxXOmfY63X1NMMyFVEypQs7NngR4UUIZx5R/pubchart?oid=2135164979&format=image)](https://docs.google.com/spreadsheets/d/1uJ_zacNvJTsvJUfNpi1D_aPBg-vNJC1xJzsSwGKKt8g/edit#gid=1002437172)\n\nNote: for Azure, this uses the average times for the week leading up to the failover; for GCP, it's an average for the week up to October 2, 2018.\n\nThese stats clearly illustrate that our new fleet has far better IO performance than our old cluster. Gitaly performance is highly dependent on IO performance, so this is great news and goes a long way to explaining the performance improvements we're seeing.\n\n### Reason #2: Fewer \"unicorn worker saturation\" errors\n\n![HTTP 503 Status GitLab](https://about.gitlab.com/images/blogimages/whats-up-with-gitlab-com/facepalm-503.png)\n\nUnicorn worker saturation sounds like it'd be a good thing, but it's really not!\n\nWe ([currently](https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/1899)) rely on [unicorn](https://bogomips.org/unicorn/), a Ruby/Rack HTTP server, for serving much of the application. Unicorn uses a single-threaded model with a fixed pool of worker processes. Each worker can handle only one request at a time. If a worker gives no response within 60 seconds, it is terminated and another process is spawned to replace it.\n\n> #### _“Unicorn worker saturation sounds like it'd be a good thing, but it's really not!”_\n\nAdd to this the lack of autoscaling technologies to ramp the fleet up when we experience high load volumes, and GitLab.com has a relatively static-sized pool of workers to handle incoming requests.\n\nIf a Gitaly server experiences load problems, even fast [RPCs](https://en.wikipedia.org/wiki/Remote_procedure_call) that would normally take only milliseconds could take up to several seconds to respond – thousands of times slower than usual. 
Requests to the unicorn fleet that communicate with the slow server will take hundreds of times longer than expected. Eventually, most of the fleet is handling requests to that affected backend server. This leads to a queue which affects all incoming traffic, a bit like a tailback on a busy highway caused by a traffic jam on a single offramp.\n\nIf a request gets queued for too long – after about 60 seconds – it will be cancelled, leading to a 503 error. This is indiscriminate – all requests, whether they interact with the affected server or not, will get cancelled. This is what I call unicorn worker saturation, and it's a very bad thing.\n\nBetween February and August this year we frequently experienced this phenomenon.\n\nThere are several approaches we've taken to dealing with this:\n\n- **Fail fast with aggressive timeouts and circuit breakers**: Timeouts mean that a Gitaly request expected to take a few milliseconds times out after a second, rather than waiting the full 60 seconds. While some requests will still be affected, the cluster will remain generally healthy. Gitaly currently doesn't use circuit breakers, but we plan to add them, possibly using [Istio](https://istio.io/docs/tasks/traffic-management/circuit-breaking/) once we've moved to Kubernetes.\n\n- **Better abuse detection and limits**: More often than not, server load spikes are driven by users going against our fair usage policies. We built tools to better detect this, and over the past few months an abuse team has been established to deal with it. 
Sometimes, load is driven through huge repositories, and we're working on reinstating fair-usage limits which prevent 100GB Git repositories from affecting our entire fleet.\n\n- **Concurrency controls and rate limits**: To limit the blast radius, rate limiters (mostly in HAProxy) and concurrency limiters (in Gitaly) slow overzealous users down to protect the fleet as a whole.\n\n### Reason #3: GitLab.com no longer uses NFS for any Git access\n\nIn early September we disabled Git NFS mounts across our worker fleet. This was possible because Gitaly had reached v1.0: the point at which it's sufficiently complete. You can read more about how we got to this stage in our [Road to Gitaly blog post](/blog/the-road-to-gitaly-1-0/).\n\n### Reason #4: Migration as a chance to reduce debt\n\nThe migration was a fantastic opportunity for us to improve our infrastructure, simplify some components, and otherwise make GitLab.com more stable and more observable. For example, we've rolled out new **structured logging infrastructure**.\n\nAs part of the migration, we took the opportunity to move much of our logging across to structured logs. We use [fluentd](https://www.fluentd.org/), [Google Pub/Sub](https://cloud.google.com/pubsub/docs/overview), and [Pubsubbeat](https://github.com/GoogleCloudPlatform/pubsubbeat), storing our logs in [Elastic Cloud](https://www.elastic.co/cloud) and [Google Stackdriver Logging](https://cloud.google.com/logging/). Having reliable, indexed logs has allowed us to reduce our mean time to detection of incidents, and in particular to detect abuse. This new logging infrastructure has also been invaluable in detecting and resolving several security incidents.\n\n> #### _“This new logging infrastructure has also been invaluable in detecting and resolving several security incidents”_\n\nWe've also focused on making our staging environment much more similar to our production environment. 
This allows us to test more changes, more accurately, in staging before rolling them out to production. Previously the team was maintaining\na limited scaled-down staging environment and many changes were not adequately tested before being rolled out. Our environments now share a common configuration and we're working to automate all [terraform](https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5079) and [chef](https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/5078) rollouts.\n\n### Reason #5: Process changes\n\nUnfortunately many of the worst outages we've experienced over the past few years have been self-inflicted. We've always been transparent about these — and will continue to be so — but as we rapidly grow, it's important that our processes scale alongside our systems and team.\n\n> #### _“It's important that our processes scale alongside our systems and team”_\n\nIn order to address this, over the past few months, we've formalized our change and incident management processes. These processes respectively help us to avoid outages and resolve them quicker when they do occur.\n\nIf you're interested in finding out more about the approach we've taken to these two vital disciplines, they're published in our handbook:\n\n- [GitLab.com's Change Management Process](/handbook/engineering/infrastructure/change-management/)\n- [GitLab.com's Incident Management Process](/handbook/engineering/infrastructure/incident-management/)\n\n### Reason #6: Application improvement\n\nEvery GitLab release includes [performance and stability improvements](https://gitlab.com/gitlab-org/gitlab-ce/issues?scope=all&state=opened&label_name%5B%5D=performance); some of these have had a big impact on GitLab's stability and performance, particularly n+1 issues.\n\nTake Gitaly for example: like other distributed systems, Gitaly can suffer from a class of performance degradations known as \"n+1\" problems. 
This happens when an endpoint needs to make many queries (_\"n\"_) to fulfill a single request.\n\n> Consider an imaginary endpoint which queried Gitaly for all tags on a repository, and then issued an additional query for each tag to obtain more information. This would result in n + 1 Gitaly queries: one for the initial list of tags, and then n more, one per tag. This approach would work fine for a project with 10 tags – issuing 11 requests – but for a project with 1000 tags it would result in 1001 Gitaly calls, each with a round-trip time, and issued in sequence.\n\n![Latency drop in Gitaly endpoints](https://about.gitlab.com/images/blogimages/whats-up-with-gitlab-com/drop-off.png)\n\nUsing data from Pingdom, this chart shows long-term performance trends since the start of the year. It's clear that latency improved a great deal on May 7, 2018. This date happens to coincide with the RC1 release of GitLab 10.8, and its deployment on GitLab.com.\n\nIt turns out that this was due to a [single n+1 fix on the merge request page](https://gitlab.com/gitlab-org/gitlab-ce/issues/44052).\n\nWhen running in development or test mode, GitLab now detects n+1 situations, and we have compiled [a list of known n+1s](https://gitlab.com/gitlab-org/gitlab-ce/issues?scope=all&utf8=%E2%9C%93&state=opened&label_name[]=performance&label_name[]=Gitaly&label_name[]=technical%20debt). 
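The imaginary tags endpoint can be sketched with a stub client (the method names are hypothetical, not Gitaly's real RPC interface) to show how the call count grows:

```python
class StubGitaly:
    """Counts RPC round-trips; a stand-in for a real Gitaly client."""
    def __init__(self):
        self.calls = 0

    def list_tags(self, repo):
        self.calls += 1  # one round-trip for the tag list
        return [f"v{i}" for i in range(10)]

    def tag_details(self, repo, tag):
        self.calls += 1  # one round-trip per tag
        return {"name": tag}

    def tags_with_details(self, repo):
        self.calls += 1  # a single batched round-trip
        return [{"name": f"v{i}"} for i in range(10)]


def fetch_tags_naive(client, repo):
    # The n+1 pattern: 1 list call, then n detail calls, issued in sequence.
    return [client.tag_details(repo, tag) for tag in client.list_tags(repo)]


def fetch_tags_batched(client, repo):
    # The fix: ask for everything in one request.
    return client.tags_with_details(repo)
```

With 10 tags the naive version issues 11 RPCs while the batched version issues 1; at 1000 tags the gap is 1001 versus 1, each sequential call adding a full round-trip time.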
As these are resolved we expect even more performance improvements.\n\n![GitLab Summit - South Africa - 2018](https://about.gitlab.com/images/summits/2018_south-africa_team.jpg)\n\n### Reason #7: Infrastructure team growth and reorganization\n\nAt the start of May 2018, the Infrastructure team responsible for GitLab.com consisted of five engineers.\n\nSince then, we've added a new director, two new managers, a specialist [Postgres DBRE](https://gitlab.com/gitlab-com/www-gitlab-com/merge_requests/13778), and four new [SREs](https://handbook.gitlab.com/job-families/engineering/infrastructure/site-reliability-engineer/) to the Infrastructure team. The database team has been reorganized to be an embedded part of the Infrastructure group. We've also brought in [Ongres](https://www.ongres.com/), a specialist Postgres consultancy, to work alongside the team.\n\nHaving enough people in the team has allowed us to split time between on-call, tactical improvements, and longer-term strategic work.\n\nOh, and we're still hiring! If you're interested, check out [our open positions](/jobs/) and choose the Infrastructure Team 😀\n\n## TL;DR: Conclusion\n\n1. GitLab.com is more stable: availability has improved 61 percent since we migrated to GCP\n1. GitLab.com is faster: latency has improved since the migration\n1. 
We are totally focused on continuing these improvements, and we're building a great team to do it\n\nOne last thing: our Grafana dashboards are open, so if you're interested in digging into our metrics in more detail, visit [dashboards.gitlab.com](https://dashboards.gitlab.com) and explore!\n",[1150,1149,1288,9,1004,1289],"inside GitLab","performance",{"slug":1291,"featured":6,"template":688},"gitlab-com-stability-post-gcp-migration","content:en-us:blog:gitlab-com-stability-post-gcp-migration.yml","Gitlab Com Stability Post Gcp Migration","en-us/blog/gitlab-com-stability-post-gcp-migration.yml","en-us/blog/gitlab-com-stability-post-gcp-migration",{"_path":1297,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":1298,"content":1303,"config":1308,"_id":1310,"_type":13,"title":1311,"_source":15,"_file":1312,"_stem":1313,"_extension":18},"/en-us/blog/gitlab-eks-integration-how-to",{"title":1299,"description":1300,"ogTitle":1299,"ogDescription":1300,"noIndex":6,"ogImage":974,"ogUrl":1301,"ogSiteName":675,"ogType":676,"canonicalUrls":1301,"schema":1302},"How to create a Kubernetes cluster on Amazon EKS in GitLab","A Kubernetes tutorial: Create clusters in a few clicks with GitLab and Amazon EKS.","https://about.gitlab.com/blog/gitlab-eks-integration-how-to","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"How to create a Kubernetes cluster on Amazon EKS in GitLab\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Abubakar Siddiq Ango\"}],\n        \"datePublished\": \"2020-03-09\",\n      }",{"title":1299,"description":1300,"authors":1304,"heroImage":974,"date":1305,"body":1306,"category":683,"tags":1307},[980],"2020-03-09","Kubernetes has created a whole new world for running infrastructure at\nscale. With the right setup, an application can go from serving a few users\nto millions effortlessly. 
Setting up Kubernetes can be taxing, however, and can\nrequire a lot of expertise to put all the pieces together. You’ll need to\nset up virtual or bare metal machines to use as nodes and manage SSL\ncertificates, networking, load balancers, and many other moving parts.\n\n\nThe introduction of Amazon Elastic Kubernetes Service (EKS) was widely\napplauded because it abstracts away much of this complexity, in an\nenvironment most organizations are already familiar with and on a provider\nthey already trust. Amazon EKS makes creating and managing Kubernetes\nclusters easier, with more granular controls around security and\nstraightforward policies for how resources are used.\n\n\nGitLab strives to increase developer productivity by automating repetitive\ntasks and allowing developers to focus on business logic. We recently\nintroduced support for auto-creating Kubernetes clusters on Amazon EKS. In a\nfew clicks, with the right permissions, you’ll have a fully functional\nKubernetes cluster on Amazon EKS. It doesn’t stop there, however – GitLab\nalso gives you the power to achieve the following use cases and more:\n\n\n* [Highly scalable CI/CD system using GitLab\nRunner](https://docs.gitlab.com/runner/): There are times, like holidays, when\nfew or no code changes are pushed to production, so why keep resources\ntied down? With the Amazon EKS integration with GitLab, you can install\nGitLab Runner with just a click and your CI/CD will run effortlessly without\nworrying about running out of resources.\n\n* Shared Cluster: Maintaining multiple Kubernetes clusters can be painful and\ncapital intensive. 
With Amazon EKS, GitLab allows you to set up a cluster at the\n[Instance](https://docs.gitlab.com/ee/user/instance/clusters/index.html),\n[Group](https://docs.gitlab.com/ee/user/group/clusters/index.html) and\n[Project](https://docs.gitlab.com/ee/user/project/clusters/) levels.\nKubernetes Namespaces are created for each GitLab project when Amazon\nEKS is integrated at the Instance and Project levels, allowing isolation and\nensuring security.\n\n* [Review Apps](https://docs.gitlab.com/ee/ci/review_apps/index.html):\nReviewing changes to code or design can be tricky: you’ll need to check out\nyour branch and run the code in a test environment. GitLab integrated with\nAmazon EKS deploys your app with new changes to a dynamic environment, and\nall you need to do is click a “View App” button to review the changes.\n\n* [AutoDevOps](https://docs.gitlab.com/ee/topics/autodevops/cloud_deployments/auto_devops_with_gke.html)\ntakes DevOps to a whole new level. AutoDevOps detects, builds, tests,\ndeploys, and monitors your applications, leveraging the Amazon EKS\nintegration. All you have to do is push your code and the magic happens. In\nthis tutorial, we will use AutoDevOps to deploy a sample application to the\nAmazon EKS cluster we will be creating.\n\n\nTo show you how easy it is to create an Amazon EKS cluster from GitLab, the\nrest of this tutorial will walk you through the steps of the integration,\nstarting with a one-time setup of necessary resources on AWS.\n\n\n## One-time setup on AWS to access resources\n\n\nFirst, we need to create a “provision” role and a “service” role on AWS to\ngrant GitLab access to your AWS resources and set up the necessary\npermissions to create and manage EKS clusters. You only need to perform\nthese steps once, and you can reuse the roles anytime you want to perform\nanother integration or create more clusters.\n\n\n### Step 1 - Create Provision Role\n\n\nTo grant GitLab access to your AWS resources, a “provision role” is\nrequired. 
Let’s create one:\n\n\n1. Access the GitLab Kubernetes integration page by clicking the “Kubernetes”\nmenu for groups, or the Operations > Kubernetes menu for projects, and click\nthe “Add Kubernetes Cluster” button.\n\n2. Select “Amazon EKS” in the options provided under the “Create new cluster\non EKS” tab.\n\n3. You are provided with an Account ID and External ID to use for\nauthentication. Make note of these values to be used in a later step.\n\n    ![Gitlab EKS Integration Page](https://about.gitlab.com/images/blogimages/gitlab-eks-integration/gitlab_eks_integration_page.png)\n\n4. Open the IAM Management Console in another tab and click “Create Role.”\n\n5. Click the “Another AWS account” tab, provide the Account ID and\nExternal ID obtained from GitLab, and click Next to set permissions as shown\nbelow:\n\n    ![AWS Provision Role](https://about.gitlab.com/images/blogimages/gitlab-eks-integration/provision_role.png)\n\n6. On the permissions page, click “Create policy.” This will open a new\ntab where you can set either of the policies below using JSON:\n\n    ```json\n    {\n        \"Version\": \"2012-10-17\",\n        \"Statement\": [\n            {\n                \"Effect\": \"Allow\",\n                \"Action\": [\n                    \"autoscaling:*\",\n                    \"cloudformation:*\",\n                    \"ec2:*\",\n                    \"eks:*\",\n                    \"iam:*\",\n                    \"ssm:*\"\n                ],\n                \"Resource\": \"*\"\n            }\n        ]\n    }\n    ```\n\n    This gives GitLab full access to create and manage resources, as seen in the image below:\n\n    ![AWS Role Policy](https://about.gitlab.com/images/blogimages/gitlab-eks-integration/create_role_policy.png)\n\n    If you prefer more limited permissions, you can use the JSON snippet below to give GitLab the ability to create resources, but not delete them. 
The drawback here is that if an error is encountered during the creation process, changes will not be rolled back, and you must remove resources manually. You can do this by deleting the relevant CloudFormation stack.\n\n    ```json\n    {\n        \"Version\": \"2012-10-17\",\n        \"Statement\": [\n            {\n                \"Effect\": \"Allow\",\n                \"Action\": [\n                    \"autoscaling:CreateAutoScalingGroup\",\n                    \"autoscaling:DescribeAutoScalingGroups\",\n                    \"autoscaling:DescribeScalingActivities\",\n                    \"autoscaling:UpdateAutoScalingGroup\",\n                    \"autoscaling:CreateLaunchConfiguration\",\n                    \"autoscaling:DescribeLaunchConfigurations\",\n                    \"cloudformation:CreateStack\",\n                    \"cloudformation:DescribeStacks\",\n                    \"ec2:AuthorizeSecurityGroupEgress\",\n                    \"ec2:AuthorizeSecurityGroupIngress\",\n                    \"ec2:RevokeSecurityGroupEgress\",\n                    \"ec2:RevokeSecurityGroupIngress\",\n                    \"ec2:CreateSecurityGroup\",\n                    \"ec2:CreateTags\",\n                    \"ec2:DescribeImages\",\n                    \"ec2:DescribeKeyPairs\",\n                    \"ec2:DescribeRegions\",\n                    \"ec2:DescribeSecurityGroups\",\n                    \"ec2:DescribeSubnets\",\n                    \"ec2:DescribeVpcs\",\n                    \"eks:CreateCluster\",\n                    \"eks:DescribeCluster\",\n                    \"iam:AddRoleToInstanceProfile\",\n                    \"iam:AttachRolePolicy\",\n                    \"iam:CreateRole\",\n                    \"iam:CreateInstanceProfile\",\n                    \"iam:CreateServiceLinkedRole\",\n                    \"iam:GetRole\",\n                    \"iam:ListRoles\",\n                    \"iam:PassRole\",\n                    \"ssm:GetParameters\"\n                ],\n                \"Resource\": \"*\"\n            }\n        ]\n    }\n    ```\n\n    The image below visualizes which permissions are granted:\n\n    ![Limited Role Policy](https://about.gitlab.com/images/blogimages/gitlab-eks-integration/limited_role_policy.png)\n\n7. Once the policy is created, return to the “Create Role” browser tab and\nrefresh to see the policy we created listed. Select the policy and click\n“Next.”\n\n8. In the Tags section, you don’t need to set any tags unless your\norganization requires them. Proceed to Review.\n\n9. Specify a name for your new role. You will see the policy we created\nlisted under Policies; click “Create Role” to complete the process.\n\n10. Click on the new role in the list of Roles to view its\ndetails. You may have to search for it if it’s not\nlisted in the first view. Copy the Role ARN provided – we will need it on\nthe GitLab Kubernetes Integration page.\n\n\n### Step 2 - Create Service Role\n\n\nThe Service Role is required to allow Amazon EKS and the Kubernetes control\nplane to manage AWS resources on your behalf.\n\n\n1. In the IAM Management Console, click “Create Role” and select the “AWS\nservice” tab.\n\n2. Select EKS in the list of services and use cases, as shown below, and click\nNext.\n\n    ![Service Role](https://about.gitlab.com/images/blogimages/gitlab-eks-integration/service_role.png)\n\n3. You will notice the “AmazonEKSClusterPolicy” and “AmazonEKSServicePolicy”\npermissions are selected; these are all we need. Click through the Tags step\n(adding tags if required), then click Next to get to the Review step. Click\n“Create Role” to complete the process.\n\n    ![Role Summary](https://about.gitlab.com/images/blogimages/gitlab-eks-integration/role_summary.png)\n\n## GitLab EKS Integration\n\n\nThis is the easy part! 
As mentioned earlier, you only need to create the\nProvision and Service roles once if you don’t already have them in your\norganization’s AWS setup. You can reuse the roles for other integrations or\ncluster creations.\n\n\n1. Return to the GitLab Kubernetes Integration page, provide the Role ARN\nof the Provision Role we created earlier, and click “Authenticate with AWS.”\n\n    ![Gitlab EKS Integration Page](https://about.gitlab.com/images/blogimages/gitlab-eks-integration/gitlab_eks_integration_page.png)\n\n2. Once authenticated, you’ll see a page for setting the parameters needed to\nset up your cluster, as shown in the image below. Click “Create\nKubernetes Cluster” to let GitLab do its magic!\n\n    The parameters you’ll need to provide are:\n    * **Kubernetes cluster name** - The name you wish to give the cluster.\n    * **Environment scope** - The [GitLab environment](https://docs.gitlab.com/ee/user/project/clusters/index.html#setting-the-environment-scope) associated with this cluster; `*` denotes the cluster will be used for deployments to all environments.\n    * **Kubernetes version** - The Kubernetes version to use. 
Currently, the only version supported is 1.14.\n    * **Role name** - The service role we created earlier.\n    * **Region** - The [AWS region](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html) in which the cluster will be created.\n    * **Key pair name** - Select the [key pair](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html) that you can use to connect to your worker nodes if required.\n    * **VPC** - Select a [VPC](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html) to use for your EKS Cluster resources.\n    * **Subnets** - Choose the [subnets](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html) in your VPC where your worker nodes will run.\n    * **Security group** - Choose the [security group](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html) to apply to the EKS-managed Elastic Network Interfaces that are created in your worker node subnets. AWS provides a default group, which can be used for the purpose of this guide. However, you are advised to set up the rules required for your resources.\n    * **Instance type** - The AWS [instance type](https://aws.amazon.com/ec2/instance-types/) of your worker nodes.\n    * **Node count** - The number of worker nodes.\n    * **GitLab-managed cluster** - Leave this checked if you want [GitLab to manage namespaces and service accounts](https://docs.gitlab.com/ee/user/project/clusters/index.html#gitlab-managed-clusters) for this cluster.\n\n    ![Gitlab EKS Integration Page](https://about.gitlab.com/images/blogimages/gitlab-eks-integration/gitlab_eks_integration_post_auth.png)\n\n3. The cluster creation process will take approximately 10 minutes. Once\ndone, you can proceed to install some predefined applications. 
At the very\nleast, you need to install the following:\n    - **Helm Tiller**: This is required to install the other applications.\n    - **Ingress**: This provides SSL termination, load balancing, and name-based virtual hosting for your applications. It acts as a web proxy for your application, which is useful when using AutoDevOps or deploying your own apps.\n    - **Cert Manager**: This is a native Kubernetes certificate management controller, which helps issue certificates using Let’s Encrypt. You don’t need this if you want to use a custom certificate issuer.\n    - **Prometheus**: GitLab uses the Prometheus integration to automatically monitor your applications and collect metrics from Kubernetes containers, allowing you to understand what is going on from within the GitLab UI.\n\n    ![Gitlab EKS Integration Page](https://about.gitlab.com/images/blogimages/gitlab-eks-integration/gitlab_eks_integration_post_cluster.png)\n\n4. To make use of the Auto Review Apps and Auto Deploy stages of\n[AutoDevOps](https://docs.gitlab.com/ee/topics/autodevops/quick_start_guide.html),\nyou will need to specify a base domain with a wildcard DNS record pointing to\nthe Ingress endpoint generated when you install Ingress from the list of\npredefined apps.\n\n\n## Summary\n\n\nIn this tutorial, we looked at how GitLab integrates with Amazon EKS,\nallowing Kubernetes clusters to be created easily from the GitLab UI after\nsetting the right permissions. As we’ve seen, developer productivity is\ngreatly improved by no longer having to manually set up clusters. Also, the\nsame cluster can be used for multiple projects when Amazon EKS is integrated\nwith GitLab at the Group and Instance levels, thus making onboarding new\nprojects a breeze. 
After integration, the possibilities of what developers\ncan achieve are enormous.\n\n\nIn the next part of this tutorial, we will look at how to deploy your\napplications on an Amazon EKS cluster using AutoDevOps.\n",[9,984,923],{"slug":1309,"featured":6,"template":688},"gitlab-eks-integration-how-to","content:en-us:blog:gitlab-eks-integration-how-to.yml","Gitlab Eks Integration How To","en-us/blog/gitlab-eks-integration-how-to.yml","en-us/blog/gitlab-eks-integration-how-to",{"_path":1315,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":1316,"content":1322,"config":1327,"_id":1329,"_type":13,"title":1330,"_source":15,"_file":1331,"_stem":1332,"_extension":18},"/en-us/blog/gitlab-gke-autopilot",{"title":1317,"description":1318,"ogTitle":1317,"ogDescription":1318,"noIndex":6,"ogImage":1319,"ogUrl":1320,"ogSiteName":675,"ogType":676,"canonicalUrls":1320,"schema":1321},"How to use GitLab with GKE Autopilot","GitLab works out of the box with the new GKE Autopilot from Google Cloud, a managed variant of the popular Google Kubernetes Engine.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749681920/Blog/Hero%20Images/kubernetes.png","https://about.gitlab.com/blog/gitlab-gke-autopilot","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"How to use GitLab with GKE Autopilot\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Abubakar Siddiq Ango\"}],\n        \"datePublished\": \"2021-02-24\",\n      }",{"title":1317,"description":1318,"authors":1323,"heroImage":1319,"date":1324,"body":1325,"category":1004,"tags":1326},[980],"2021-02-24","\n\nIn the cloud native landscape, there are dozens of providers that offer managed Kubernetes services. Despite the abstraction and ease of use promised, a major problem remains: getting the node size right. 
You want it to match your workloads so that you don’t under-provision – making the workloads unstable – or over-provision and rack up unnecessary costs. \n\n[GKE Autopilot from Google Cloud](https://cloud.google.com/blog/products/containers-kubernetes/introducing-gke-autopilot) solves this problem by enabling your team to focus on building your solutions with a fully managed and opinionated variant of Google Kubernetes Engine (GKE), where nodes are automatically provisioned based on your workload requirements, with no need to manage them independently. \n\nGKE Autopilot uses the resource specification in the PodSpec of your deployment to provision nodes or use defaults, automatically resize the nodes, or provision new nodes as the pods’ needs change. GitLab and Google Cloud officially support several use cases, including running GitLab and GitLab Runners as workloads on GKE Autopilot clusters, as well as using GitLab CI/CD to deploy applications onto GKE Autopilot.\n\n## GitLab and GKE Autopilot\n\n### GitLab Server\n\nGitLab can be installed on GKE Autopilot out of the box using the official Helm charts and can be configured to match your company’s use case, such as external object storage and databases. GKE Autopilot ensures the right sizes and number of nodes are provisioned based on the requirements specified in the GitLab charts and your customizations. You can access other resources in Google Cloud, such as storage and databases, using Google Cloud Workload Identity.\n\nAll GKE Autopilot clusters come with Google Cloud Workload Identity pre-configured. Workload Identity allows you to bind Kubernetes Service Accounts to Google Service Accounts, with whatever permissions that Google Service Account has. 
This can include resources in other Google Cloud Platform projects.\n\nIn the first part of the GitLab with GKE Autopilot demo, I demonstrate how to install GitLab on a GKE Autopilot cluster:\n\n\u003Cfigure class=\"video_container\">\n  \u003Ciframe src=\"https://www.youtube-nocookie.com/embed/cNffh-qyXhQ\" frameborder=\"0\" allowfullscreen=\"true\"> \u003C/iframe>\n\u003C/figure>\n\n### GitLab Runner\n\nBecause GKE Autopilot does not support privileged pods, GitLab Runner can be deployed on it only in unprivileged mode, meaning it can run only GitLab CI jobs that do not require privileged pods or Docker-in-Docker. To build container images, [Kaniko](https://docs.gitlab.com/ee/ci/docker/using_kaniko.html) or similar tools can be used as an alternative to Docker. This applies both to the runner bundled in the official GitLab Helm chart and to a runner deployed independently using the official GitLab Runner chart. It also affects jobs using GitLab Auto DevOps, which works best when an independent runner (set up on a GKE Standard cluster or a virtual machine) is registered with the GitLab server running on GKE Autopilot.\n\n### Integrating GKE Autopilot clusters\n\nGKE Autopilot clusters integrate with GitLab just like GKE Standard clusters. There are two options, the preferred of which is to use the [GitLab Agent for Kubernetes](/blog/gitlab-kubernetes-agent-on-gitlab-com/), especially if you are concerned about security or your cluster is behind a firewall. You can learn more about this in our [detailed documentation](https://docs.gitlab.com/ee/user/clusters/agent/).\nAlternatively, you can create a cluster-admin service account and provide the cluster certificate and token to [integrate with the cluster](https://docs.gitlab.com/ee/user/project/clusters/add_remove_clusters.html). As of the time of writing, GKE Autopilot clusters cannot be created from GitLab like standard GKE clusters. 
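To make the Kaniko alternative mentioned above concrete, here is a sketch of what a CI job that builds an image without Docker-in-Docker can look like. The job name, stage, and Dockerfile path are illustrative; the executor image and flags follow the Kaniko approach in the GitLab docs linked earlier:

```shell
# Sketch: write a minimal .gitlab-ci.yml whose build job uses the Kaniko
# executor image instead of Docker-in-Docker (no privileged pod required).
cat > .gitlab-ci.yml <<'EOF'
build-image:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - /kaniko/executor
      --context "$CI_PROJECT_DIR"
      --dockerfile "$CI_PROJECT_DIR/Dockerfile"
      --destination "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
EOF
echo ".gitlab-ci.yml written"
```

In a real pipeline the `CI_*` variables are provided by GitLab, and Kaniko also needs registry credentials configured (for example via `/kaniko/.docker/config.json`), as described in the Kaniko documentation.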
The Docker-in-Docker limitation also affects the runner listed in the GitLab-managed apps that you can install as part of the integration. \n\nIn the second part of the demo video, I demonstrate how to integrate GitLab with a GKE Autopilot cluster and deploy an application using Auto DevOps.\n\n\u003Cfigure class=\"video_container\">\n  \u003Ciframe src=\"https://www.youtube-nocookie.com/embed/rCwHL3hQEWU\" frameborder=\"0\" allowfullscreen=\"true\"> \u003C/iframe>\n\u003C/figure>\n\n## Considerations\n\nGKE Autopilot is opinionated and less configurable than GKE Standard. As a managed service, it allows you to focus on delivering the best solutions to your users rather than worrying about operations; such limitations are common to managed Kubernetes services. \n\nAdministrative access to the nodes provisioned by GKE Autopilot is not supported, so any operation requiring access to the nodes is limited. Host options, node selectors, node affinity/anti-affinity, taints, and tolerations are other functionalities that apply at the node level in GKE Standard but are not supported in Autopilot.\n\nWhen integrating an Autopilot cluster with GitLab, you cannot install the bundled cert-manager. I encountered an error while testing, stating that `mutatingwebhookconfigurations/` is managed and access is denied in GKE Autopilot. Alternatively, you can follow the directions provided in the official cert-manager documentation.\n\n## Wrapping up\n\nGKE Autopilot is designed to implement Google Cloud-developed best practices and has been fine-tuned to provide an ideal user experience. You can move from idea to production and scale worry-free when you integrate GitLab with GKE Autopilot, allowing you to deploy and monitor your application’s health, all within GitLab. 
If you also choose to deploy GitLab itself on GKE Autopilot, our official Helm chart will work out of the box.\n",[9,1150,1149,232],{"slug":1328,"featured":6,"template":688},"gitlab-gke-autopilot","content:en-us:blog:gitlab-gke-autopilot.yml","Gitlab Gke Autopilot","en-us/blog/gitlab-gke-autopilot.yml","en-us/blog/gitlab-gke-autopilot",{"_path":1334,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":1335,"content":1341,"config":1348,"_id":1350,"_type":13,"title":1351,"_source":15,"_file":1352,"_stem":1353,"_extension":18},"/en-us/blog/gitlab-helm-package-registry",{"title":1336,"description":1337,"ogTitle":1336,"ogDescription":1337,"noIndex":6,"ogImage":1338,"ogUrl":1339,"ogSiteName":675,"ogType":676,"canonicalUrls":1339,"schema":1340},"Introducing the GitLab Helm Package Registry","Develop and deploy cloud native applications with a built-in Helm registry.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749663397/Blog/Hero%20Images/logoforblogpost.jpg","https://about.gitlab.com/blog/gitlab-helm-package-registry","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"Introducing the GitLab Helm Package Registry\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"William Chia\"}],\n        \"datePublished\": \"2021-07-26\",\n      }",{"title":1336,"description":1337,"authors":1342,"heroImage":1338,"date":1344,"body":1345,"category":1004,"tags":1346},[1343],"William Chia","2021-07-26","\n\nCloud native application architectures use containerization, microservices, and Kubernetes to run reliably at cloud-scale. With a built-in container registry and Kubernetes integration, GitLab is the best way to develop and deploy cloud native applications. 
[GitLab version 14.1](/releases/2021/07/22/gitlab-14-1-released/) also includes a Helm registry, which allows users to publish, install, and share Helm charts and packages from within our single application for the entire DevOps lifecycle.\n\n### What is Helm?\n\nHelm is a package manager for Kubernetes. A Chart is a Helm package that contains the resource definitions required to run an application inside a Kubernetes cluster. Helm allows you to manage complex applications by storing the application definition in a chart that can be versioned, shared, and collaborated on.\n\n### The differences between Helm Registry and Git\n\nWhy not simply store your Helm charts in a Git repository? After all, charts are YAML files that can be stored, versioned, and collaborated on like code.\n\nFor small projects and simple applications, it can be convenient to store the Helm chart in the same Git repository as the application code. However, this method starts to become unwieldy as the code scales. Applying this model to a microservices architecture means you'd have many different charts spread out across many different repositories. Cluster-wide upgrades would certainly be a challenge. And sharing charts with other teams would require you to also grant access to the code repository.\n\n### Comparing Helm registry and container registry\n\nAnother option for storing Helm charts is to use an OCI registry, like the GitLab Container Registry. However, this feature is new to Helm 3 and requires running Helm in experimental mode. Many organizations, especially those in highly regulated environments, prefer not to expose themselves to the additional risk of an experimental feature.\n\n### A built-in, dedicated Helm registry\n\nA Helm registry offers a centralized repository to store and share charts so large organizations can manage many complex applications in a controlled manner. 
The main benefits of having a dedicated registry are security, efficiency, and reliability.\n\nWhen it comes to security, having all of the charts in one central location means they can be [systematically scanned for vulnerabilities](https://docs.gitlab.com/ee/user/application_security/sast/#supported-languages-and-frameworks). This is much more difficult if your charts are stored in multiple locations. Similarly, user accounts and permissions are much easier to manage from a single location.\n\nA central registry also makes it much easier to distribute charts throughout your organization. Large organizations will often have a center of excellence that is responsible for creating, maintaining, and distributing charts to many different teams throughout the organization. Enabling a safe way to share charts and control access is critical.\n\nGitLab users can host all Helm charts from one central project, allowing users to control access with SSO/SAML and authorization with deploy tokens, job tokens, or personal access tokens. Not to mention, the GitLab.com Package stage is 99.95% available.\n\n### How to get started\n\nThe new Helm Registry is currently at \"viable\" maturity. We do not recommend using it for production, but it can be used for testing and planning. Visit the [Helm Repository docs](https://docs.gitlab.com/ee/user/packages/helm_repository/) for step-by-step commands to authenticate to the registry and to publish and install packages.\n\n### Contribute to the Helm Registry\n\nThe first iteration of the Helm registry was contributed to GitLab by community member [Mathieu Parent](https://gitlab.com/sathieu). We'd love your input and feedback as we continue to improve and mature the Helm registry capabilities. This [GitLab Epic outlines the path to make the Helm chart registry complete](https://gitlab.com/groups/gitlab-org/-/epics/6366). Comment in the epic and associated issues with your thoughts and feedback. 
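For a sense of the getting-started flow described above, the sketch below prints the shape of the publish-and-install commands. It is not the authoritative syntax – follow the Helm Repository docs for that – and the project ID, channel, chart file, and credentials are all placeholders:

```shell
# Sketch of the publish-and-install flow against the GitLab Helm registry.
# PROJECT_ID, CHANNEL, the chart file, and the token are placeholders.
PROJECT_ID=12345678
CHANNEL=stable
BASE="https://gitlab.example.com/api/v4/projects/${PROJECT_ID}/packages/helm"

# Publish a packaged chart to the project's registry:
PUBLISH="curl --request POST --user <username>:<access-token> --form chart=@mychart-0.1.0.tgz ${BASE}/api/${CHANNEL}/charts"

# Add the project as a Helm repo, then install the chart from it:
ADD="helm repo add gitlab-charts ${BASE}/${CHANNEL}"
INSTALL="helm install my-release gitlab-charts/mychart"

printf '%s\n' "$PUBLISH" "$ADD" "$INSTALL"
```

The `<username>:<access-token>` pair can be a personal access token, deploy token, or CI job token, per the docs linked above.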
As always, [code contributions](/community/contribute/development/) are welcome.\n",[1347,685,984,9],"collaboration",{"slug":1349,"featured":6,"template":688},"gitlab-helm-package-registry","content:en-us:blog:gitlab-helm-package-registry.yml","Gitlab Helm Package Registry","en-us/blog/gitlab-helm-package-registry.yml","en-us/blog/gitlab-helm-package-registry",{"_path":1355,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":1356,"content":1361,"config":1366,"_id":1368,"_type":13,"title":1369,"_source":15,"_file":1370,"_stem":1371,"_extension":18},"/en-us/blog/gitlab-journey-from-azure-to-gcp",{"title":1357,"description":1358,"ogTitle":1357,"ogDescription":1358,"noIndex":6,"ogImage":1140,"ogUrl":1359,"ogSiteName":675,"ogType":676,"canonicalUrls":1359,"schema":1360},"GitLab’s journey from Azure to GCP","GitLab Staff Engineer Andrew Newdigate shares how we completed our migration to Google Cloud Platform, and how we overcame challenges along the way.","https://about.gitlab.com/blog/gitlab-journey-from-azure-to-gcp","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"GitLab’s journey from Azure to GCP\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Chrissie Buchanan\"}],\n        \"datePublished\": \"2019-05-02\",\n      }",{"title":1357,"description":1358,"authors":1362,"heroImage":1140,"date":1363,"body":1364,"category":300,"tags":1365},[787],"2019-05-02","\n\nLast June, we had to face the facts: Our SaaS infrastructure for GitLab.com was not ready for mission-critical workloads, error rates were just too high, and availability was too low. To address these challenges, we decided to migrate from Azure to Google Cloud Platform (GCP) and document the journey publicly, end to end. 
A lot has happened since [we first talked about moving to GCP](/blog/moving-to-gcp/), and we’re excited to share the results.\n\nAt [Google Cloud Next '19](https://cloud.withgoogle.com/next/sf), GitLab Staff Engineer [Andrew Newdigate](/company/team/#suprememoocow) presented our migration experience and the steps we took to make it happen. Migrations seldom go as planned, but we hope that others can learn from the process. Check out the video to learn more about our journey from Azure to GCP, and find some of our key takeaways below.\n\n\u003C!-- blank line -->\n\u003Cfigure class=\"video_container\">\n  \u003Ciframe src=\"https://www.youtube.com/embed/Ve_9mbJHPXQ\" frameborder=\"0\" allowfullscreen=\"true\"> \u003C/iframe>\n\u003C/figure>\n\u003C!-- blank line -->\n\nThere were several reasons why we decided on Google Cloud Platform. One top priority was that we wanted GitLab.com to be suitable for mission-critical workloads, and GCP offered the performance and consistency we needed. A second reason was that we believe [Kubernetes](/solutions/kubernetes/) is the future, especially with so much development geared toward [cloud native](/topics/cloud-native/). Another priority was price. For all of these reasons and more, Google was the clear choice as a partner going forward.\n\nOur company values are important to us, and we apply them to all aspects of our work; our migration from Azure to GCP is no exception.\n\n## Three core values guided this project:\n\n### Efficiency\n\nAt GitLab, [we love boring solutions](https://handbook.gitlab.com/handbook/values/#boring-solutions). The goal of the project was really simple: Move GitLab.com to GCP. We wanted to find the least complex and most straightforward solution to achieve this goal.\n\n### Iteration\n\nWe focus on shipping the [minimum viable change](https://handbook.gitlab.com/handbook/values/#minimal-viable-change-mvc) and work in steps. 
When we practice iteration, we get feedback faster, we’re able to course-correct, and we reduce cycle times.\n\n### Transparency\n\nWe work [publicly by default](https://handbook.gitlab.com/handbook/values/#public-by-default), which is why we made [this project accessible to everyone](https://gitlab.com/gitlab-com/migration/) and [documented our progress](https://docs.google.com/document/d/1p3Brri44_SKyakViKB-LGWCmCcwILW6z2A8a8eWFyFc/edit?usp=sharing) along the way.\n\n## How we did it\n\nLooking for the simplest solution, we considered whether we could just stop the whole site: Copy all the data from Azure to GCP, switch the DNS over to point to GCP, and then start everything up again. The problem was that we had too much data to do this within a reasonable time frame. Once we shut down the site, we'd need to copy all the data between two cloud providers, and once the copy was complete, we'd need to verify all the data (about half a petabyte) and make sure it was correct. This plan meant that GitLab.com could be down for _several days_, and considering that thousands and thousands of people rely on GitLab on a daily basis, this wouldn’t work.\n\n![GitLab Geo diagram](https://about.gitlab.com/images/gitlab_ee/gitlab_geo_diagram_migrate.png){: .medium.center}\n\nWe went back to the drawing board. We were already working on another feature called [Geo](https://docs.gitlab.com/ee/administration/geo/index.html), which allows for full, read-only mirrors of GitLab instances. Besides letting you browse the GitLab UI, Geo instances can be used for cloning and fetching projects, as well as for a planned failover to migrate GitLab instances.\n\nWe hoped that by taking advantage of the replication capabilities we were building for Geo, we could migrate the entire GitLab.com site to a secondary instance in GCP. The process might have taken weeks or months, but thankfully the site would be available throughout the synchronization process. 
Once all the data was synchronized to GCP, we could verify it and make sure it was correct. Finally, we could just promote the GCP environment and make it our new primary.\n\nThis new plan had many advantages over the first one. Obviously, GitLab.com would be up during the synchronization and we would only have a short period of downtime, maybe an hour or two, rather than weeks. We could do full QA, load testing, and verify all data before the failover.\n\n>\"If it could work for us on GitLab.com, it would pretty much work for any other customer who wanted to use Geo. We could be confident in that.\" - Andrew Newdigate, Infrastructure Architect at GitLab\n\n![Helm charts](https://about.gitlab.com/images/blogimages/gitlab-journey-from-azure-to-gcp/helm_charts.png){: .medium.center}\n\nWe were also working on another major project to install and run GitLab on Kubernetes. Much like Omnibus is a package installer for installing GitLab _outside_ a Kubernetes environment, GitLab’s Helm charts [install GitLab inside a Kubernetes environment](https://docs.gitlab.com/charts/). The plan evolved to use Helm charts to install GitLab in GCP while still using Geo for replication.\n\nIt became apparent that there were problems with this approach as we went along:\n\n*   The changes we needed to make to the application to allow it to become fully cloud native were extensive and required major work.\n*   The timeframes of the GCP migration and cloud native projects wouldn’t allow us to carry them out simultaneously.\n\nWe ultimately decided it would be better to postpone the move to Kubernetes until after the migration to GCP.\n\nWe went to the next iteration and decided to use Omnibus to provision the new environment. We also migrated all file artifacts, including CI artifacts, traces (CI log files), file attachments, LFS objects, and other file uploads to [Google Cloud Storage](https://cloud.google.com/storage/) (GCS), moving about 200TB of data off our Azure-based file servers into GCS. 
Doing this reduced the risk and the scale of the Geo migration.\n\nThe steps for the migration were now fairly straightforward:\n\n*   Set up a Geo secondary in GCP.\n*   Provision the new environment with Omnibus.\n*   Replicate all the data from GitLab.com in Azure to GCP.\n*   Test the new environment and verify all the data is correct.\n*   Failover to the GCP environment and promote it to primary.\n\nThere was only one major unknown left in this plan: The actual failover operation itself.\n\nUnfortunately, **Geo didn’t support a failover operation**, and nobody knew exactly how to do it. It was essential that we executed this perfectly, so we used our value of iteration to get it right.\n\n![GitLab failover procedure issue template](https://about.gitlab.com/images/blogimages/gitlab-journey-from-azure-to-gcp/issue_template.png){: .medium.center}\n\n*   We set up the failover procedure as an issue template in the GitLab migration issue tracker with each step as a checklist item.\n*   Every time we practiced, we created a new issue from the template and followed the checklist step by step.\n*   After each failover, we would review and consider how we could improve the process.\n*   We would submit these changes as merge requests to the issue template.\n\nThe merge requests were thoroughly reviewed before being approved by the team and through this very tight, iterative feedback loop, the checklist grew to cover every possible scenario we experienced. In the beginning, things almost never went according to plan, but with each iteration, we got better. In the end, there were _over 140 changes_ in that document before we felt confident enough to move forward with the failover. We let Google know and an amazing team was assembled to help us. The failover went smoothly and we didn't experience any major problems.\n\n## Results\n\nGoing back to the goals of the project: Did we make GitLab.com suitable for mission-critical workloads? 
Firstly, let's consider availability on GitLab.com.\n\n![GitLab Pingdom chart](https://about.gitlab.com/images/blogimages/gitlab-journey-from-azure-to-gcp/errors_per_day.png){: .shadow.medium.center}\n\nThis [Pingdom](https://www.pingdom.com/) graph shows the number of errors we saw per day, first in Azure and then in GCP. The average for the pre-migration period was 8.2 errors per day, while post-migration it’s down to **just one error a day**.\n\n![GitLab availability](https://about.gitlab.com/images/blogimages/gitlab-journey-from-azure-to-gcp/gitlab_availability.png){: .shadow.medium.center}\n\nLeading up to the migration, our availability was 99.61 percent. [In our October update](/blog/gitlab-com-stability-post-gcp-migration/) we were at 99.88 percent. As of April 2019, we've improved to **99.93 percent** and are on track to reach our target of 99.95 percent availability.\n\n![GitLab latency chart](https://about.gitlab.com/images/blogimages/gitlab-journey-from-azure-to-gcp/latency.png){: .shadow.medium.center}\n\nThis latency histogram compares the site performance of GitLab.com before and after moving to GCP. We took data for one week before the migration and one week after the migration. 
The GCP line shows us that the latencies in GCP drop off quicker, which means GitLab.com is not only faster, it’s more predictable, with fewer outlier values taking an unacceptably long time.\n\n[GitLab users have also noticed the increased stability](https://www.reddit.com/r/gitlab/comments/9f71nq/thanks_gitlab_team_for_improving_the_stability_of/), which is an encouraging sign that we've taken steps in the right direction.\n\nIt's important to note that these improvements can't be attributed to the migration alone – we explore some other contributing factors in [our October update](/blog/gitlab-com-stability-post-gcp-migration/).\n\n\n## What we learned\n\n* Having this amount of visibility into a large-scale migration project is pretty unusual, but it gave us an opportunity to put our values to the test. By opening our documentation to the world, we can collaborate and help others on their own migration journey.\n*  Working by our values gave us the ability to get the quick feedback we needed. Even though we weren’t able to use GitLab on Kubernetes during the migration, we course-corrected and came up with the right solutions.\n* We were able to see exactly how Google developers work and got an up-close look into how one of the fastest-moving companies in the world actually manages its [DevOps lifecycle](/topics/devops/). 
This knowledge will have a long-term impact on GitLab and how we support these organizations in the future.\n\nIf you would like to learn more about how we migrated to GCP, feel free to take a look at the **[issue tracker](https://gitlab.com/gitlab-com/migration/)** and our **[project documentation](http://bit.ly/2UrlU4s)**.\n",[1149,727,1150,9],{"slug":1367,"featured":6,"template":688},"gitlab-journey-from-azure-to-gcp","content:en-us:blog:gitlab-journey-from-azure-to-gcp.yml","Gitlab Journey From Azure To Gcp","en-us/blog/gitlab-journey-from-azure-to-gcp.yml","en-us/blog/gitlab-journey-from-azure-to-gcp",{"_path":1373,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":1374,"content":1379,"config":1385,"_id":1387,"_type":13,"title":1388,"_source":15,"_file":1389,"_stem":1390,"_extension":18},"/en-us/blog/gitlab-kubernetes-agent-on-gitlab-com",{"title":1375,"description":1376,"ogTitle":1375,"ogDescription":1376,"noIndex":6,"ogImage":1319,"ogUrl":1377,"ogSiteName":675,"ogType":676,"canonicalUrls":1377,"schema":1378},"A new era of Kubernetes integrations on GitLab.com","The GitLab Agent for Kubernetes enables secure deployments from GitLab SaaS to your Kubernetes cluster and provides deep integrations of your cluster to GitLab.","https://about.gitlab.com/blog/gitlab-kubernetes-agent-on-gitlab-com","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"A new era of Kubernetes integrations on GitLab.com\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Viktor Nagy\"}],\n        \"datePublished\": \"2021-02-22\",\n      }",{"title":1375,"description":1376,"authors":1380,"heroImage":1319,"date":1381,"body":1382,"category":1004,"tags":1383},[765],"2021-02-22","\n\nThe GitLab Agent for Kubernetes (\"Agent\", for short) provides a secure connection between a GitLab instance and a Kubernetes cluster, and enables pull-based deployments as well as alerts based on network policies. 
We released the first version of the Agent back in September on self-managed GitLab instances. We are happy to announce that the Agent is available on GitLab SaaS, GitLab.com, with many more features coming soon.\n\nIf you run into any issues with the Agent or would like to provide feedback, please [contribute in the Agent epic](https://gitlab.com/groups/gitlab-org/-/epics/3329).\n{: .alert .alert-warning}\n\n## Why a new era?\n\nPreviously, the recommended way to attach a cluster to GitLab was to provide the cluster certificates and to open up the Kube API to GitLab.com. To get the most out of the integrations, we recommended attaching the cluster with `cluster-admin` rights, so GitLab could provision new namespaces and create review apps. But many users found this to be overly risky and instead rolled out custom integrations that were often built around the GitLab Runner. With the GitLab Agent for Kubernetes, we want to simplify and support security-minded users and provide them with a safe, reliable, and future-proof integration solution between GitLab and their clusters. The GitLab Agent provides a secure connection between the cluster and GitLab, our users can control access rights much more tightly with the Agent, and we consider it the basis for future Kubernetes integrations with GitLab.\n\nWhen Kubernetes was just starting to get popular, our initial approach served new Kubernetes users well. At the same time, providing `cluster-admin` rights is not an option for many current users with experienced Site Reliability Engineers (SREs) and Platform Engineers on board. In the past few years, thanks to the certificate-based integrations, we have learned a lot about the needs of GitLab users, and we are leveraging these learnings with the Agent.\n\n## How does the Agent work?\n\nThe Agent provides a permanent connection using websockets or gRPC between a Kubernetes cluster and a GitLab instance. 
Since we want to keep the cluster-side component minimal and lightweight, we imagine multiple Agents being installed into the same cluster with different access levels. Still, this integration is complex. To understand how the Agent works, let me first introduce its major components. The whole Agent experience is made possible primarily by two components that we call `agentk` and `kas` (short for GitLab Agent Server). `agentk` is the cluster-side component that has to be deployed in the cluster, while `kas` is the GitLab server-side component that is managed alongside GitLab. To keep `agentk` as slim as possible, `kas` is responsible for much of the heavy lifting.\n\nThe Agent is configured in code, then registered with GitLab through an access token. Once installed in the cluster, `agentk` receives the access token and the `kas` endpoint and authenticates itself with GitLab. Subsequently, it retrieves its own configuration from GitLab and keeps a connection open between `kas` and the cluster. This way, both the agent and GitLab can send messages to and receive information from the other party through a secure connection. This approach also allows a Kubernetes cluster sitting behind a firewall to be securely integrated with GitLab.com.\n\n## Getting started\n\n### About the Agent's availability\n\nIf you would like to try out the Agent on GitLab.com, `kas` is already installed and is managed by our SRE team. Before making the Agent generally available, we want to make sure that Agent-based workflows won't harm the performance of GitLab.com. This is why, at this time, `kas` is only available for select customers and projects. If you would like to try it out, [reach out to me](/company/team/#nagyv-gitlab) by email or by mentioning me in an issue with your project ID, and we will authorize your project.\n\nGitLab's `kas` instance is available at `wss://kas.gitlab.com`. 
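Deploying `agentk` boils down to wiring in those two inputs, the `kas` address and the agent's registration token. A hypothetical fragment of a Kubernetes Deployment spec, just to illustrate the wiring (the image tag, file paths, and volume names are placeholders; see the installation docs for the supported methods):

```yaml
containers:
  - name: agentk
    # Placeholder tag; pin a real agentk release in practice
    image: registry.gitlab.com/gitlab-org/cluster-integration/gitlab-agent/agentk:latest
    args:
      - --kas-address=wss://kas.gitlab.com   # GitLab.com's kas endpoint
      - --token-file=/config/token           # token obtained when registering the agent
    volumeMounts:
      - name: token-volume                   # Secret holding the agent token
        mountPath: /config
```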
You will have to provide this value together with a registered agent access token when you deploy `agentk` to your cluster. You can [follow the installation instructions from our documentation](https://docs.gitlab.com/ee/user/clusters/agent/#define-a-configuration-repository) starting with defining a configuration repository.\n\n### How deployments work\n\nIf you prefer a video walk-through, we demonstrate how pull-based deployments work with the Agent below.\n\n\u003Cfigure class=\"video_container\">\n  \u003Ciframe src=\"https://www.youtube-nocookie.com/embed/17O_ARVaRGo\" frameborder=\"0\" allowfullscreen=\"true\"> \u003C/iframe>\n\u003C/figure>\n\nFor deployments, we share some code with ArgoCD since this part of the Agent is built on the [gitops-engine](https://github.com/argoproj/gitops-engine/). The `gitops-engine` provides a simple tool to keep git repositories synced with cluster resources. The Agent is configured in code. What we call the \"agent configuration project\" references the repositories containing the Kubernetes manifests, which are the resource definitions describing the expected state of your cluster. Whenever these manifests change, the Agent automatically pulls the new configuration and applies it in the cluster.\n\n#### An example using Helm\n\nToday, the GitLab Agent for Kubernetes only supports pull-based deployments, but we are working on connecting it with GitLab CI to also provide push-based deployment support. So far, we have created a simple example repository that shows how someone might use the Agent together with Helm to install the GitLab Runner in their cluster.\n\nOne critique of Helm is that you might get different deployments without changing anything in the code you manage. We want to make sure that your manifest projects reflect what is expected to be deployed in your cluster. 
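For context, the "agent configuration project" mentioned above declares which manifest repositories the Agent keeps in sync. A minimal sketch of that configuration file (the project path and glob are hypothetical):

```yaml
# .gitlab/agents/<agent-name>/config.yaml in the agent configuration project
gitops:
  manifest_projects:
    # Hypothetical project path; the Agent watches these manifests
    # and applies any change to the cluster
    - id: "path/to/manifest-project"
      paths:
        - glob: '/**/*.yaml'
```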
This is why we recommend that you use GitLab CI to generate and commit the final Kubernetes manifests from your preferred templating tool, and let the Agent take care of deploying the rendered templates. We follow this pattern in the example repository too.\n\n### Kubernetes network security alerts\n\nIn [GitLab 13.9](/releases/2021/02/22/gitlab-13-9-released/) we are [shipping an integration with Cilium built on top of the Agent](/releases/2021/02/22/gitlab-13-9-released/#configmap-support-for-kubernetes-agent-server). The integration provides a simple way to generate network policy-related alerts and to surface those alerts in GitLab. Watch the video below for a demo:\n\n\u003Cfigure class=\"video_container\">\n  \u003Ciframe src=\"https://www.youtube-nocookie.com/embed/mFpXUvcAT1g\" frameborder=\"0\" allowfullscreen=\"true\"> \u003C/iframe>\n\u003C/figure>\n\n## Ongoing developments\n\nWhile we think that the Agent can already bring great value to Silver and Gold-level GitLab users, we are working constantly to build even more features on top of it.\n\nOur primary focus now is to make the Agent generally available on GitLab.com SaaS. We are also working on a set of features that allows a user to connect GitLab CI with clusters securely using the Agent. 
This allows existing push-based deployments to start easily using the Agent and the integrations coming with it.\n\nWe are excited to see how you will benefit from the Agent and what amazing things you will build with it.\n\n## Read more on Kubernetes:\n\n- [How to install and use the GitLab Kubernetes Operator](/blog/gko-on-ocp/)\n\n- [Threat modeling the Kubernetes Agent: from MVC to continuous improvement](/blog/threat-modeling-kubernetes-agent/)\n\n- [How to deploy the Agent with limited permissions](/blog/setting-up-the-k-agent/)\n\n- [Understand Kubernetes terminology from namespaces to pods](/blog/kubernetes-terminology/)\n\n- [What we learned after a year of GitLab.com on Kubernetes](/blog/year-of-kubernetes/)\n",[9,1384,1288],"git",{"slug":1386,"featured":6,"template":688},"gitlab-kubernetes-agent-on-gitlab-com","content:en-us:blog:gitlab-kubernetes-agent-on-gitlab-com.yml","Gitlab Kubernetes Agent On Gitlab Com","en-us/blog/gitlab-kubernetes-agent-on-gitlab-com.yml","en-us/blog/gitlab-kubernetes-agent-on-gitlab-com",{"_path":1392,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":1393,"content":1398,"config":1403,"_id":1405,"_type":13,"title":1406,"_source":15,"_file":1407,"_stem":1408,"_extension":18},"/en-us/blog/gitlab-pages-update",{"title":1394,"description":1395,"ogTitle":1394,"ogDescription":1395,"noIndex":6,"ogImage":1140,"ogUrl":1396,"ogSiteName":675,"ogType":676,"canonicalUrls":1396,"schema":1397},"Update about GitLab Pages","If you are using GitLab Pages with a custom domain, you may need to update your DNS.","https://about.gitlab.com/blog/gitlab-pages-update","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"Update about GitLab Pages\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"David Smith\"}],\n        \"datePublished\": \"2018-08-28\",\n      
}",{"title":1394,"description":1395,"authors":1399,"heroImage":1140,"date":1400,"body":1401,"category":683,"tags":1402},[1145],"2018-08-28","\n\nSince completing our move to Google Cloud Platform (GCP) on August 11, 2018, GitLab.com traffic has been served from our new infrastructure in GCP. For GitLab Pages users, we left a proxy in place in Azure to remain backwards compatible for those Pages users who had an A record pointing to the IP address at our Azure location.\n\nWe had planned a graceful window to give people time to migrate their DNS records. In our [July GCP move update](/blog/gcp-move-update/), we referenced the new IP address at GCP that people should use.\n\nIn that transition, users should have moved their DNS records from 52.167.214.135 to 35.185.44.232.\n\nThis week, we started cleanup of parts of our now legacy Azure infrastructure. Unfortunately, that cleanup also swept up the Azure load balancer that held the old 52.167.214.135 IP address for the GitLab Pages proxy. We quickly filed a ticket to see if we could reclaim the IP address, but we were given no guarantee that we could get it back when we rebuilt the load balancer. 
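If you want to check whether a custom domain is affected, compare what its A record resolves to against the two IPs from this post. A small sketch (the helper names are ours, not a GitLab tool):

```python
import socket

# IPs from this post: the retired Azure proxy address and the new GCP one
OLD_AZURE_IP = "52.167.214.135"
NEW_GCP_IP = "35.185.44.232"

def pages_dns_action(resolved_ip: str) -> str:
    """Report whether a GitLab Pages custom domain needs a DNS update."""
    if resolved_ip == OLD_AZURE_IP:
        return f"update your A record to {NEW_GCP_IP}"
    if resolved_ip == NEW_GCP_IP:
        return "no change needed"
    return "domain does not point at either GitLab Pages IP"

def check_domain(domain: str) -> str:
    """Resolve the domain's A record (live DNS lookup) and report."""
    return pages_dns_action(socket.gethostbyname(domain))
```

`check_domain("www.example.com")` does a live lookup; `pages_dns_action` can be fed any already-resolved address.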
This post is to get the information out for those Pages users who have been affected by this change.\n\n### What you need to know:\n\nIf you are using GitLab Pages with a custom domain AND you have an A record in DNS that points to the old Azure IP, you will need to update your DNS:\n\n|from IP (old)|to IP (new)|\n|",[1149,727,1150,9],{"slug":1404,"featured":6,"template":688},"gitlab-pages-update","content:en-us:blog:gitlab-pages-update.yml","Gitlab Pages Update","en-us/blog/gitlab-pages-update.yml","en-us/blog/gitlab-pages-update",{"_path":1410,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":1411,"content":1417,"config":1424,"_id":1426,"_type":13,"title":1427,"_source":15,"_file":1428,"_stem":1429,"_extension":18},"/en-us/blog/gitops-done-3-ways",{"title":1412,"description":1413,"ogTitle":1412,"ogDescription":1413,"noIndex":6,"ogImage":1414,"ogUrl":1415,"ogSiteName":675,"ogType":676,"canonicalUrls":1415,"schema":1416},"3 Ways to approach GitOps","Learn about how GitLab users can employ GitOps to cover both Kubernetes and non-Kubernetes environments","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749669635/Blog/Hero%20Images/gitops-cover.jpg","https://about.gitlab.com/blog/gitops-done-3-ways","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"3 Ways to approach GitOps\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Saumya Upadhyaya\"},{\"@type\":\"Person\",\"name\":\"Dov Hershkovitch\"}],\n        \"datePublished\": \"2021-04-27\",\n      }",{"title":1412,"description":1413,"authors":1418,"heroImage":1414,"date":1421,"body":1422,"category":683,"tags":1423},[1419,1420],"Saumya Upadhyaya","Dov Hershkovitch","2021-04-27","\n\nThe term [\"GitOps\"](/topics/gitops/) first emerged in the Kubernetes community as a way for organizations to enable Ops teams to move at the pace of application development. 
With improved automation and less risk, GitOps is quickly becoming the workflow of choice for infrastructure automation.\n\nAt GitLab, the approach to GitOps goes beyond Kubernetes. Before the buzz around GitOps picked up in the DevOps community, GitLab users and customers were applying GitOps principles to all types of infrastructure, including physical servers, virtual machines, containers, and Kubernetes clusters ([multicloud](/topics/multicloud/) and on-premise).\n\n## What is GitOps?\n\nThere are two main [approaches to GitOps](https://www.gitops.tech/), a push-based approach and a pull-based approach.\n\n- *Push-based approach*: A CI/CD tool pushes the changes to the environment. Applying GitOps via push is consistent with the approach used for application deployment. In this case, deployment targets for a push-based approach are not limited to Kubernetes.\n![push based deployment](https://about.gitlab.com/images/blogimages/gitops-push.png){: .shadow.medium.center}\nHow the push-based approach works for GitOps.\n{: .note.text-center}\n\n- *Pull-based approach*: An agent installed in a cluster pulls changes whenever there is a deviation from the desired configuration. In the pull-based approach, deployment targets are limited to Kubernetes and an agent must be installed in each Kubernetes cluster.\n![pull based deployment](https://about.gitlab.com/images/blogimages/gitops-pull.png){: .shadow.medium.center}\nHow the pull-based approach works for GitOps.\n{: .note.text-center}\n\n## How to employ GitOps principles using GitLab\n\nGitLab supports both of the approaches mentioned above, which can be used with and without a Kubernetes agent. 
Along with the [recently introduced Kubernetes agent](/blog/gitlab-kubernetes-agent-on-gitlab-com/), GitLab supports GitOps principles across all types of deployment targets and environments: a single application for infrastructure code and configurations; CI/CD for automation; and merge requests for collaboration and controls.\n\nBelow we unpack three methods for applying GitOps principles using GitLab technology.\n\n### Push using manually configured CI/CD release targets\n\nThe infrastructure configurations are stored in git. The user sets up the [supported deployment targets](/install/) and uses the standard CI/CD workflow to push infrastructure changes. To ensure the desired state in the repository is consistent with the environment, CI/CD will need to run on a regular schedule to identify drift and reconcile as required. Manual intervention may be required at times to deal with failed pipelines. Many GitLab users have been using this approach to push infrastructure changes to their test, staging, and production environments.\n\nThe manual push approach is ideal for both Kubernetes and supported non-Kubernetes environments, such as embedded systems, on-premise servers, mainframes, virtual machines, or FaaS offerings.\n\n### Push using Terraform\n\nIn this approach, an out-of-the-box [integration with Terraform](https://docs.gitlab.com/ee/user/infrastructure/) helps Terraform users seamlessly implement GitOps workflows using GitLab. Terraform manifests are stored in the Git repository where users can collaborate on changes within merge requests. The Terraform plan reports can be displayed within the merge requests and the Terraform state can be stored using the GitLab-managed Terraform state backend. 
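As a sketch of how that wiring can look in a CI job (the job name, state name, and image are illustrative, and the Terraform code is assumed to declare an empty `backend "http" {}` block):

```yaml
variables:
  # GitLab-managed Terraform state: one HTTP-backend address per named state
  TF_ADDRESS: ${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/terraform/state/production

plan:
  image:
    name: hashicorp/terraform:light
    entrypoint: [""]
  script:
    # Point Terraform's http backend at the GitLab-managed state
    - terraform init -backend-config="address=${TF_ADDRESS}" -backend-config="username=gitlab-ci-token" -backend-config="password=${CI_JOB_TOKEN}"
    - terraform plan
```

A complete setup also passes `lock_address` and `unlock_address` for state locking; GitLab additionally ships CI templates that hide this boilerplate.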
Everything is integrated into GitLab, which spares users from performing these tasks via third-party tools or integrations.\n\nThe push approach is ideal for both Kubernetes and non-Kubernetes deployment targets that are supported by Terraform.\n\n### Pull using a Kubernetes agent\n\nIn fall 2020, GitLab [introduced a Kubernetes agent](/blog/gitlab-kubernetes-agent-on-gitlab-com/) that initiates a secure web-socket connection from a Kubernetes cluster to a GitLab instance. There is a GitLab server component that polls for any repository changes on the server and informs the agent when there is a deviation between the desired state and the cluster environment. This process helps minimize the load on the cluster and network. Whenever drift is detected, the agent pulls the latest configurations from the git repository and updates the environment accordingly. This GitOps approach requires the Kubernetes agent to be installed on every Kubernetes cluster, which can be done with ease as the GitLab Agent for Kubernetes uses GitOps principles to install and update the agent as required. This GitOps method is ideal for Kubernetes environments only.\n\n![kubernetes agent](https://about.gitlab.com/images/blogimages/gitops-agent.png){: .shadow.medium.center}\nInside the pull-based approach using a Kubernetes agent.\n{: .note.text-center}\n\n### Up next: Push using a Kubernetes agent\n\nGitLab also aims to support GitOps by using a push approach with a Kubernetes agent. The push-based approach using a manually configured Kubernetes target attaches a Kubernetes cluster to GitLab through a certificate exchange. This approach leverages the CI/CD workflow for infrastructure automation and is fairly straightforward, but it also introduces risk by opening up a firewall and using cluster admin rights for cluster integration. 
To overcome these challenges while leveraging the CI/CD workflow, the [push-based approach using the Kubernetes agent](https://gitlab.com/groups/gitlab-org/-/epics/5528) aims to reuse the web-socket interface to establish a secure connection between GitLab and the Kubernetes cluster and allows GitLab CI/CD to securely push changes using this interface. When available, this approach would also provide a migration path for users who are currently setting up the Kubernetes integration using a certificate exchange.\n\nThe third approach is ideal for Kubernetes environments only. When available, it can be used in conjunction with the pull-based approach to optimize the GitOps workflow.\n\n## Accelerate the SDLC with GitOps principles\n\nWhether you are using physical servers, virtual machines, containers, or Kubernetes, on-prem or in the cloud, GitLab applies GitOps principles in a variety of ways to meet your team wherever it is. GitLab supports many different options because we understand the typical organization has a mixed IT landscape, with various heterogeneous technologies in a number of different environments.\n\n***What’s your preferred approach to GitOps?*** Drop us a comment.\n\n## Learn more about GitOps at GitLab\n\nRead on to explore how GitLab works with different technologies to deliver a GitOps solution for every company at every stage.\n\n* ***Blog***: [A new era of Kubernetes integrations on GitLab.com](/blog/gitlab-kubernetes-agent-on-gitlab-com/)\n* ***Webcast***: [GitLab and HashiCorp - A holistic guide to GitOps and the Cloud Operating Model](/webcast/gitlab-hashicorp-gitops/)\n* ***Testimonial***: [Shaping a financial service’s cloud strategy using GitLab and Terraform](https://www.youtube.com/watch?v=2LF3eOoGV_o&list=PLFGfElNsQthb4FD4y1UyEzi2ktSeIzLxj&index=6)\n\nCover image by [Rodolfo Cuadros](https://unsplash.com/@rocua18) on [Unsplash](https://unsplash.com/photos/JKzgp6vhJ8M)\n{: 
.note}\n",[539,815,9,727],{"slug":1425,"featured":6,"template":688},"gitops-done-3-ways","content:en-us:blog:gitops-done-3-ways.yml","Gitops Done 3 Ways","en-us/blog/gitops-done-3-ways.yml","en-us/blog/gitops-done-3-ways",{"_path":1431,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":1432,"content":1437,"config":1443,"_id":1445,"_type":13,"title":1446,"_source":15,"_file":1447,"_stem":1448,"_extension":18},"/en-us/blog/gitops-with-gitlab-auto-devops",{"title":1433,"description":1434,"ogTitle":1433,"ogDescription":1434,"noIndex":6,"ogImage":1338,"ogUrl":1435,"ogSiteName":675,"ogType":676,"canonicalUrls":1435,"schema":1436},"Connecting Kubernetes clusters to GitLab with Auto DevOps","This is the 6th article in a series of tutorials on how to do GitOps with GitLab","https://about.gitlab.com/blog/gitops-with-gitlab-auto-devops","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"GitOps with GitLab: Connecting GitLab with a Kubernetes cluster - Auto DevOps\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Viktor Nagy\"}],\n        \"datePublished\": \"2022-02-08\",\n      }",{"title":1438,"description":1434,"authors":1439,"heroImage":1338,"date":1440,"body":1441,"category":683,"tags":1442},"GitOps with GitLab: Connecting GitLab with a Kubernetes cluster - Auto DevOps",[765],"2022-02-08","_It is possible to use GitLab as a best-in-class GitOps tool, and this blog\npost series is going to show you how. These easy-to-follow tutorials will\nfocus on different user problems, including provisioning, managing a base\ninfrastructure, and deploying various third-party or custom applications on\ntop of them. 
You can find the entire \"Ultimate guide to GitOps with GitLab\"\ntutorial series\n[here](/blog/the-ultimate-guide-to-gitops-with-gitlab/)._\n\n\nIn this article we will look at how one can use Auto DevOps with all its\nbells and whistles to easily manage deployments.\n\n\n## Prerequisites\n\n\nThis article builds upon the previous tutorials in this series. We will\nassume that you have a Kubernetes cluster connected to GitLab using the\nGitLab Agent for Kubernetes, and you understand how the CI/CD tunnel works.\n\n\nIf this is not the case, I recommend following the previous articles to have\na similar setup from where we will start today.\n\n\n## What is Auto DevOps\n\n\nAuto DevOps is GitLab's answer to the complexity of software application\ndelivery. It is a set of opinionated templates that can be used \"as-is\" or\nto fast-track your own pipeline building. For some setups it\nworks from testing through various security and compliance checks to canary\ndeployments. Even if you have a less supported setup, you should be able to\nreuse some of its components, from security linting to deployment.\n\n\nYou can read more about the various [features built into Auto DevOps in our\ndocumentation](https://docs.gitlab.com/ee/topics/autodevops/).\n\n\n## The plan for building and deploying a minimal application\n\n\nThe plan for this article is to build and deploy a minimal application. The\nfocus will be on showing how you can get started quickly, without any\nmodifications to the Auto Deploy pipelines.\n\n\nThis setup will use the already known CI/CD tunnel. There will be a separate\narticle that shows how to replace the \"Auto Deploy\" part of Auto DevOps with\nGitOps style deployments.\n\n\nIn this article, we will deploy a simple hello world application. 
This is\nnot a tutorial about Auto DevOps, so we will only focus on the setup needed\nwhen used together with the GitLab Agent for Kubernetes.\n\n\nYou can see the final repository under\nhttps://gitlab.com/gitlab-examples/ops/gitops-demo/hello-world-service/.\n\n\n## How to build the application\n\n\nIn this section we will create our super simple hello world application and\nput a Dockerfile beside it.\n\n\n1. Start a new project.\n\n1. Add `src/main.py` with the following content:\n    ```python\n    # From https://gist.github.com/davidbgk/b10113c3779b8388e96e6d0c44e03a74\n    import http.server\n    import socketserver\n    from http import HTTPStatus\n\n    class Handler(http.server.SimpleHTTPRequestHandler):\n        def do_GET(self):\n            self.send_response(HTTPStatus.OK)\n            self.end_headers()\n            self.wfile.write(b'Hello world')\n\n    httpd = socketserver.TCPServer(('', 5000), Handler)\n    httpd.serve_forever()\n    ```\n1. Create the `Dockerfile` with:\n   ```\n   FROM python:3.9.10-slim-bullseye\n\n   WORKDIR /app\n\n   COPY ./src .\n\n   EXPOSE 5000\n\n   CMD [ \"python\", \"main.py\" ]\n   ```\n1. Commit the change to the repository.\n\n\n## How to set up Auto DevOps\n\n\n1. [Share the CI/CD\ntunnel](https://docs.gitlab.com/ee/user/clusters/agent/work_with_agent.html)\nwith the hello-world project. Note that the Agent configuration project and\nthe application project should be in the same project hierarchy and the\nAgent configuration project needs to be higher in this hierarchy.\n    ```yaml\n    ci_access:\n      # This agent is accessible from CI jobs in projects in these groups\n      projects:\n        - id: \u003Cpath>/\u003Cto>/\u003Cyour>/\u003Cproject>\n    ```\n1. Find out the Kubernetes context name. The agent context name is\n`\u003Cnamespace>/\u003Cgroup>/\u003Cproject>:\u003Cagent-name>`. 
You can see the available\ncontexts in CI with the following job:\n    ```yaml\n    contexts:\n      stage: .pre\n      image:\n        name: bitnami/kubectl:latest\n        entrypoint: [\"\"]\n      script:\n        - kubectl config get-contexts \n    ```\n1. Create your `.gitlab-ci.yml` file to have Auto DevOps working:\n    ```yaml\n    include:\n        template: Auto-DevOps.gitlab-ci.yml\n\n    variables:\n        # KUBE_INGRESS_BASE_DOMAIN is the application deployment domain and should be set as a variable at the group or project level.\n        KUBE_INGRESS_BASE_DOMAIN: 74.220.23.215.nip.io\n        KUBE_CONTEXT: \"gitlab-examples/ops/gitops-demo/k8s-agents:demo-agent\"\n        KUBE_NAMESPACE: \"demo-agent\"\n\n        # Feel free to enable any of these\n        TEST_DISABLED: \"true\"\n        CODE_QUALITY_DISABLED: \"true\"\n        LICENSE_MANAGEMENT_DISABLED: \"true\"\n        BROWSER_PERFORMANCE_DISABLED: \"true\"\n        LOAD_PERFORMANCE_DISABLED: \"true\"\n        SAST_DISABLED: \"true\"\n        SECRET_DETECTION_DISABLED: \"true\"\n        DEPENDENCY_SCANNING_DISABLED: \"true\"\n        CONTAINER_SCANNING_DISABLED: \"true\"\n        DAST_DISABLED: \"true\"\n        REVIEW_DISABLED: \"true\"\n        CODE_INTELLIGENCE_DISABLED: \"true\"\n        CLUSTER_IMAGE_SCANNING_DISABLED: \"true\"\n        POSTGRES_ENABLED: \"false\"\n    ```\n1. Commit the changes.\n\n\nAs you can see, I disabled many Auto DevOps functionalities in the above CI\nYAML. I did this for two reasons:\n\n\n1. Some of these features require a Premium or Ultimate license or tests in\nthe repo. I wanted to keep this tutorial \"stable\" for everyone.\n\n1. Every use case differs a little bit and Auto DevOps allows a large number\nof customizations. I wanted to highlight this by showing you the most basic\nones. Read more about [customizing Auto\nDevOps](https://docs.gitlab.com/ee/topics/autodevops/customize.html). 
If you\nwould like [Review Apps](https://docs.gitlab.com/ee/ci/review_apps/)\nsupport, just remove the `REVIEW_DISABLED` line.\n\n\nThere are actually only three settings needed to get the Auto DevOps pipeline up\nand running:\n\n\n- The `KUBE_CONTEXT` specifies the context used for the connection; it is\nprovided by the GitLab Agent for Kubernetes.\n\n- The `KUBE_NAMESPACE` specifies the Kubernetes namespace to target with the\ndeployments. This namespace will be used as we apply the Helm charts used\nunder the hood.\n\n- The `KUBE_INGRESS_BASE_DOMAIN` sets up an Ingress and enables\nuser-friendly access to the deployed service.\n\n\n## Recap\n\n\nA very common setup I see with GitLab customers is that the development team\nis responsible for writing the application code and packaging it into a\nDocker container. During this process, they take care of basic testing as\nwell, but they are not familiar with all the security and compliance\nrequirements or the deployment pipelines used within the company. 
The\npresented setup and the Auto DevOps suite of templates serve these teams.\nAs you can see, the teams need minimal GitLab CI setup to run a complex\npipeline that can take care of many of their requirements.\n\n\n## What's next\n\n\nIn the next article, I will show you how to deploy an application project\nwith a GitOps-style workflow.\n\n\n_[Click here](/blog/the-ultimate-guide-to-gitops-with-gitlab/)\nfor the next tutorial._\n",[539,9,748],{"slug":1444,"featured":6,"template":688},"gitops-with-gitlab-auto-devops","content:en-us:blog:gitops-with-gitlab-auto-devops.yml","Gitops With Gitlab Auto Devops","en-us/blog/gitops-with-gitlab-auto-devops.yml","en-us/blog/gitops-with-gitlab-auto-devops",{"_path":1450,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":1451,"content":1456,"config":1461,"_id":1463,"_type":13,"title":1464,"_source":15,"_file":1465,"_stem":1466,"_extension":18},"/en-us/blog/gitops-with-gitlab-connecting-the-cluster",{"title":1452,"description":1453,"ogTitle":1452,"ogDescription":1453,"noIndex":6,"ogImage":1338,"ogUrl":1454,"ogSiteName":675,"ogType":676,"canonicalUrls":1454,"schema":1455},"GitOps with GitLab: Connect with a Kubernetes cluster","In our third article in our GitOps series, learn how to connect a Kubernetes cluster with GitLab for pull and push-based deployments.","https://about.gitlab.com/blog/gitops-with-gitlab-connecting-the-cluster","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"GitOps with GitLab: Connect with a Kubernetes cluster\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Viktor Nagy\"}],\n        \"datePublished\": \"2021-11-18\",\n      }",{"title":1452,"description":1453,"authors":1457,"heroImage":1338,"date":1458,"body":1459,"category":683,"tags":1460},[765],"2021-11-18","_It is possible to use GitLab as a best-in-class GitOps tool, and this blog\npost series is going to show you how. 
These easy-to-follow tutorials will\nfocus on different user problems, including provisioning, managing a base\ninfrastructure, and deploying various third-party or custom applications on\ntop of them. You can find the entire \"Ultimate guide to GitOps with GitLab\"\ntutorial series\n[here](/blog/the-ultimate-guide-to-gitops-with-gitlab/)._\n\n\n## GitOps with GitLab: connecting a Kubernetes cluster\n\n\nThis [GitOps](/topics/gitops/) with GitLab post shows how to connect a\nKubernetes cluster with GitLab for pull- and push-based deployments and easy\nsecurity integrations. In order to do so, the following elements are\nrequired:\n\n\n- A Kubernetes cluster that you can access and in which you can create new\nresources, including `Role` and `RoleBinding`.\n\n- You will need `kubectl` and your local environment configured to access\nthe aforementioned cluster.\n\n- (Optional, recommended) Terraform and a Terraform project set up as shown\n[in the previous\narticle](/blog/gitops-with-gitlab-infrastructure-provisioning/)\nto retrieve an agent registration token from GitLab.\n\n- (Optional, recommended) `kpt` and `kustomize` to install the Agent into\nyour cluster.\n\n- (Optional, quickstart) If you prefer a less \"gitopsy\" approach, you will\nneed `docker` (Docker Desktop is not needed). 
This is simpler to follow, but\ngives you less control.\n\n\n## How to connect a cluster to GitLab\n\n\nThere are many ways to connect a cluster to GitLab:\n\n\n- you can set up a `$KUBECONTEXT` variable manually, manage all the related\nconnections and use GitLab CI/CD to push changes into your cluster\n\n- you can use a third-party tool, like\n[ArgoCD](https://argo-cd.readthedocs.io/en/stable/) or\n[Flux](https://fluxcd.io), to get pull-based deployments\n\n- you can use the legacy, certificate-based cluster integration within\nGitLab, in which case GitLab will manage the `$KUBECONTEXT` for you and you\ncan get easy metrics, log and monitoring integrations\n\n- or you can use the recommended approach, the [GitLab Agent for\nKubernetes](https://docs.gitlab.com/ee/user/clusters/agent/), to have pull-\nand push-based deployment support, network security policy integrations and\nthe possibility of metrics and monitoring too\n\n\nWe are going to focus on the Agent-based setup here, as we believe that it\nserves and will serve our users best, hopefully you included.\n\n\n## How does the Agent work\n\n\nThe Agent has a component that needs to be installed into your cluster. We\ncall this component `agentk`. Once `agentk` is installed, it reaches out to\nGitLab and authenticates itself with an access token. So, the first step is\nto get a token from GitLab. We call this step \"the Agent registration.\" If\nthe authentication succeeds, `agentk` sets up a bidirectional gRPC channel\nbetween itself and GitLab. The emphasis here is on \"bidirectional.\" This\nenables requests and messages to be sent by either side and provides the\npossibility of much deeper integrations than the other approaches while\nstill being a nice citizen within your cluster.\n\n\nOnce the connection is established, the Agent retrieves its own\nconfiguration from GitLab. 
This configuration is a `config.yaml` file under\na repository, and you actually register the location of this configuration\nfile when you register a new Agent. The configuration describes the various\ncapabilities enabled for an Agent.\n\n\nOn the GitLab side, `agentk` communicates with what we call the\nKubernetes Agent Server, or `kas`. As most users do not have to deal with\nsetting up `kas`, I won't write about it here. You need to be a GitLab\nadministrator [to set up and manage\n`kas`](https://docs.gitlab.com/ee/administration/clusters/kas.html). If you\nare on gitlab.com, `kas` is available to you at `kas.gitlab.com`, thanks to\nour amazing SRE team.\n\n\nSo the steps we are going to take in this article are the following:\n\n\n1. Create a configuration file for the Agent\n\n1. Register the Agent and retrieve its authentication token\n\n1. Install `agentk` into the cluster together with the token\n\n\nFinally, we will set up an example pull-based deployment just to test that\neverything worked as expected. Let's get started!\n\n\n## How many Agents do you need for a larger setup\n\n\nWe recommend having a separate Agent registered at least against each of\nyour environments. If you have multiple clusters, have at least one agent\nregistered with each cluster. While it is possible to have many `agentk`\ndeployments with the same authentication token and thus configuration file,\nthis is not supported and might lead to synchronization problems!\n\n\nThe different agent configurations can use the same Kubernetes manifests for\ndeployments. So maintaining a multi-region setup where all the clusters\nshould be identical does not require much effort.\n\n\nWe designed `agentk` to be very lightweight, so you should not worry about\ndeploying multiple instances of it into a cluster.\n\n\nWe know users who use separate `agentk` instances per squad, for example. 
In\nthese situations, the squad owns some namespaces in the cluster and each\nAgent can access only the namespaces available for their squad. This way\n`agentk` is not just a good citizen in your cluster, but is like a team\nmember in your squad.\n\n\n## Create a configuration file for the Agent\n\n\nNote:\n\nYou can use either the Terraform project from the previous step or start\nwith a new project. I will assume that we build on top of the Terraform\nsetup from the previous article, linked above, which will come in handy when\nwe want to register the Agent using Terraform. I won't go through setting up\nall the environment variables here for a local Terraform run.\n\n\nDecide on your agent name, and create an empty file in your project under\n`.gitlab/agents/\u003Cyour agent name>/config.yaml`. Note that the\nextension is `yaml`, not `yml`, and your agent name must follow the [DNS label\nstandard from RFC\n1123](https://docs.gitlab.com/ee/user/clusters/agent/install/#create-an-agent-configuration-file).\nI'll call my agent `demo-agent`, so the file is under\n`.gitlab/agents/demo-agent/config.yaml`.\n\n\n## Register the Agent\n\n\nThe next step is to register the Agent with GitLab. You can do this either\nthrough the GitLab UI or using Terraform. I will show you both approaches.\n\n\n### Registering through the UI\n\n\nOnce the configuration file is in place, visit `Infrastructure/Kubernetes`\nand add a new cluster using the Agent. A dialog will pop up where you can\nselect your agent.\n\n\nOnce you hit \"next,\" you will see the registration token and a `docker`\ncommand for easy installation. The `docker` command includes the token too,\nand you can run it to quickly set up an `agentk` inside of your cluster.\n(You might need to create a namespace first!) Feel free to run the command\nfor a quickstart or follow the tutorial for a truly code-based approach.\n\n\n### Registering through code\n\n\nWe will use Terraform to register the Agent through code. 
Let's create the\nfollowing files:\n\n\n- Under `terraform/gitlab-agent/main.tf`\n\n\n```hcl\n\nterraform {\n  backend \"http\" {\n  }\n  required_version = \">= 0.13\"\n  required_providers {\n    gitlab = {\n      source = \"gitlabhq/gitlab\"\n      version = \"~>3.6.0\"\n    }\n  }\n}\n\n\nprovider \"gitlab\" {\n    token = var.gitlab_password\n}\n\n\nmodule \"gitlab_kubernetes_agent_registration\" {\n  source = \"gitlab.com/gitlab-org/kubernetes-agent-terraform-register-agent/local\"\n  version = \"0.0.2\"\n\n  gitlab_project_id = var.gitlab_project_id\n  gitlab_username = var.gitlab_username\n  gitlab_password = var.gitlab_password\n  gitlab_graphql_api_url = var.gitlab_graphql_api_url\n  agent_name = var.agent_name\n  token_name = var.token_name\n  token_description = var.token_description\n}\n\n```\n\n\nAs you can see we will use a module here. The module is hosted using the\nTerraform registry provided by GitLab. You can check out [the module source\ncode\nhere](https://gitlab.com/gitlab-org/configure/examples/kubernetes-agent-terraform-register-agent).\nYou might have guessed correctly that under the hood the module uses the\nGitLab GraphQL API to register the agent and retrieve a token. 
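If you are curious what that looks like on the wire, the registration boils down to two GraphQL mutations, roughly like the following sketch (mutation and field names taken from the GitLab GraphQL API; the variable values are hypothetical, so double-check everything against your instance's GraphQL explorer):

```graphql
# Create the agent record in the configuration project
mutation {
  createClusterAgent(input: { projectPath: "path/to/your/project", name: "demo-agent" }) {
    clusterAgent { id }
    errors
  }
}

# Then mint an authentication token for the newly created agent
mutation {
  clusterAgentTokenCreate(input: { clusterAgentId: "gid://gitlab/Clusters::Agent/1", name: "kas-token" }) {
    secret
    errors
  }
}
```

The Terraform module wraps these calls so you never have to issue them by hand.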
We will need\nto set up variables for it to work.\n\n\n- Create `terraform/gitlab-agent/variables.tf`\n\n\n```hcl\n\nvariable \"gitlab_project_id\" {\n  type = string\n}\n\n\nvariable \"gitlab_username\" {\n  type = string\n}\n\n\nvariable \"gitlab_password\" {\n  type = string\n}\n\n\nvariable \"agent_name\" {\n  type = string\n}\n\n\nvariable \"token_name\" {\n  type    = string\n  default = \"kas-token\"\n}\n\n\nvariable \"token_description\" {\n  type    = string\n  default = \"Token for KAS Agent Authentication\"\n}\n\n\nvariable \"gitlab_graphql_api_url\" {\n  type    = string\n  default = \"https://gitlab.com/api/graphql\"\n}\n\n```\n\n\n- Create `terraform/gitlab-agent/outputs.tf`\n\n\n```hcl\n\noutput \"agent_id\" {\n  value     = module.gitlab_kubernetes_agent_registration.agent_id\n}\n\n\noutput \"token_secret\" {\n  value     = module.gitlab_kubernetes_agent_registration.token_secret\n  sensitive = true\n}\n\n```\n\n\nOnce the registration is over, you'll be able to retrieve the agent ID and\nthe token using these Terraform outputs.\n\n\n### Run the Terraform project\n\n\nOnce the above code is in place, we need to run it to actually register the\nAgent. 
Here, I am going to extend the setup from the previous article.\n\n\n#### Running locally\n\n\n- Create `terraform/gitlab-agent/.envrc` as you did for the network\nproject.\n\n\n```\n\nexport TF_STATE_NAME=${PWD##*terraform/}\n\nsource_env ../../.main.env\n\n```\n\n\nNow run Terraform:\n\n\n```bash\n\nterraform init\n\nterraform plan\n\nterraform apply\n\n```\n\n\n#### Running from CI/CD pipeline\n\n\nExtend the `.gitlab-ci.yml` file with the following 3 jobs:\n\n\n```yaml\n\ngitlab-agent:init:\n  extends: .terraform:init\n  stage: init\n  variables:\n    TF_ROOT: terraform/gitlab-agent\n    TF_STATE_NAME: gitlab-agent\n  only:\n    changes:\n      - \"terraform/gitlab-agent/*\"\n\ngitlab-agent:review:\n  extends: .terraform:build\n  stage: build\n  variables:\n    TF_ROOT: terraform/gitlab-agent\n    TF_STATE_NAME: gitlab-agent\n  resource_group: tf:gitlab-agent\n  only:\n    changes:\n      - \"terraform/gitlab-agent/*\"\n\ngitlab-agent:deploy:\n  extends: .terraform:deploy\n  stage: deploy\n  variables:\n    TF_ROOT: terraform/gitlab-agent\n    TF_STATE_NAME: gitlab-agent\n  resource_group: tf:gitlab-agent\n  environment:\n    name: demo-agent\n  when: manual\n  only:\n    changes:\n      - \"terraform/gitlab-agent/*\"\n    variables:\n      - $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH\n```\n\n\nAs you can see, these are the same jobs that we saw already; they are just\nparameterized for the `gitlab-agent` Terraform project.\n\n\nNote that even if you use GitLab to register the Agent, you will need your\ncommand line to install `agentk` for the first time! As a result, you\ncannot avoid a local setup, as you will need to run at least `terraform output`\nto retrieve the token!\n\n\n## Install `agentk`\n\n\nIn this tutorial we are going to follow [the advanced installation\ninstructions](https://docs.gitlab.com/ee/user/clusters/agent/install/index.html#advanced-installation)\nfrom the GitLab documentation. 
This approach is highly customizable using\n`kustomize` and `kpt`.\n\n\nFirst, let's retrieve the basic Kubernetes resource definitions using `kpt`:\n\n\n- Create a directory `packages` using `mkdir packages`\n\n- Run `kpt pkg get\nhttps://gitlab.com/gitlab-org/cluster-integration/gitlab-agent.git/build/deployment/gitlab-agent\npackages/gitlab-agent`\n\n\nThis will retrieve the most recent version of the `agentk` installation\nresources. You can request a tagged version with the well-known `@` syntax,\nfor example by running `kpt pkg get\nhttps://gitlab.com/gitlab-org/cluster-integration/gitlab-agent.git/build/deployment/gitlab-agent@v14.4.0\npackages/gitlab-agent`. You can see [all the available versions\nhere](https://gitlab.com/gitlab-org/cluster-integration/gitlab-agent/-/tags).\n\n\n### Why `kpt`?\n\n\nWe chose `kpt` because it gives you sane upstream package management.\nWith `kpt` you will be able to regularly update your packages using\nsomething like `kpt pkg update packages/gitlab-agent@\u003Cnew version>\n--strategy=resource-merge`. It basically allows you to modify your package\nlocally, and will try to merge upstream changes into it. Read the `kpt pkg\nupdate -h` output for more information and alternative merge strategies.\n\n\n### Continue with the installation\n\n\nThe `kpt` packages you retrieved are actually a set of `kustomize` overlays.\nThe `base` defines only the `agentk` deployment and namespace; the `cluster`\ndefines some default RBAC around the deployment. Feel free to add your own\noverlays and use those. 
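An overlay is just a directory with a `kustomization.yaml` that references one of the package directories as a base. A minimal sketch (the `overlays/production` layout and the patch file name are hypothetical, not part of the retrieved package):

```yaml
# overlays/production/kustomization.yaml (hypothetical layout)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../packages/gitlab-agent/cluster
# e.g. adjust the agentk Deployment via a strategic-merge patch
patchesStrategicMerge:
  - agentk-resources.yaml
```

You would then build with `kustomize build overlays/production` instead of pointing at the package directly.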
We will extend this package with custom overlays in\npart 6 of the series.\n\n\nTo configure the package, see the available configuration options using:\n\n\n```bash\n\nkustomize cfg list-setters packages/gitlab-agent\n        NAME                 VALUE               SET BY                  DESCRIPTION              COUNT   REQUIRED   IS SET  \n  agent-version       stable                 package-default   Image tag for agentk container     1       No         No      \n  kas-address         wss://kas.gitlab.com   package-default   kas address. Use                   1       No         No      \n                                                               grpc://host.docker.internal:8150                              \n                                                               if connecting from within Docker                              \n                                                               e.g. from kind.                                               \n  name-prefix                                                  Prefix for resource names          1       No         No      \n  namespace           gitlab-agent           package-default   Namespace to install GitLab        2       No         No      \n                                                               Kubernetes Agent into                                         \n  prometheus-scrape   true                   package-default   Enable or disable Prometheus       1       No         No      \n                                                               scraping of agentk metrics.                              \n```\n\n\nThe package default will be different if you used a tagged version for\ngetting the package. 
Let's set the version, as using `stable` is not\nrecommended.\n\n\n```bash\n\nkustomize cfg set packages/gitlab-agent agent-version v14.4.1\n\nset 1 field(s) of setter \"agent-version\" to value \"v14.4.1\"\n\n```\n\n\nFeel free to adjust the other configuration options too, or add your own\noverlays if that is needed.\n\n\n### Which agent version to use?\n\n\nIf possible, the version of `agentk` should match the major and minor version\nof your GitLab instance. You can find out the version of your GitLab\ninstance under the Help menu on the UI.\n\n\nIf there is no agent version with your major and minor version, then pick\nthe agent with the highest major and minor below the version of your GitLab.\n\n\n### Continue with the installation\n\n\nWarning:\n\nBefore the next step, I want to warn you about never, ever committing\nunencrypted secrets into git, and the agent registration token is a secret!\n\n\nLet's retrieve the agent registration token from our Terraform project. Run\nthe following command in the `terraform/gitlab-agent` directory:\n\n\n```bash\n\nterraform output -raw token_secret >\n../../packages/gitlab-agent/base/secrets/agent.token\n\n```\n\n\nThis writes the registration token to a file on your local computer. Do not\ncommit these changes to git!\n\n\nAt this point, we are ready to deploy `agentk` into the cluster, so run:\n\n\n```bash\n\nkustomize build packages/gitlab-agent/cluster | kubectl apply -f -\n\n```\n\n\nLet's get rid of the secret:\n\n\n```bash\n\necho \"Invalid token\" > packages/gitlab-agent/base/secrets/agent.token\n\n```\n\n\nYou are good to commit your changes to `git` now!\n\n\n## Testing the setup\n\n\nWe have installed the Agent. Now what? How can we start using it? In the\nnext article we will see in detail how to deploy a more serious application\ninto the cluster. 
Still, to check that cluster synchronization actually\nworks, let's deploy a `ConfigMap`.\n\n\n- Create `kubernetes/test_config.yaml` with the following content:\n\n\n```yaml\n\napiVersion: v1\n\nkind: ConfigMap\n\nmetadata:\n  name: gitlab-gitops\n  namespace: default\ndata:\n  key: It works!\n```\n\n\n- Modify your Agent configuration file under\n`.gitlab/agents/demo-agent/config.yaml`, and add the following to it:\n\n\n```yaml\n\ngitops:\n  # Manifest projects are watched by the agent. Whenever a project changes,\n  # GitLab deploys the changes using the agent.\n  manifest_projects:\n  - id: path/to/your/project\n    default_namespace: gitlab-agent\n    # Paths inside of the repository to scan for manifest files.\n    # Directories with names starting with a dot are ignored.\n    paths:\n    - glob: 'kubernetes/test_config.yaml'\n    #- glob: 'kubernetes/**/*.yaml'\n```\n\n\nChange the `- id: path/to/your/project` line above to point to your\nproject's path!\n\n\nThe above configuration tells the Agent to keep the\n`kubernetes/test_config.yaml` file in sync with the cluster. I've left a\ncommented line at the end to show how you could use wildcards. This will\ncome in handy in future steps of this series. The `default_namespace` is used\nif no namespace is provided in the Kubernetes manifests. There are many\nother options to configure as well, even for the `gitops` use case. You can\nread more about these in [the configuration file reference\ndocumentation](https://docs.gitlab.com/ee/user/clusters/agent/work_with_agent.html).\n\n\nOnce you commit the above changes, GitLab notifies `agentk` about the\nchanged files. First, `agentk` updates its configuration; second, it\nretrieves the `ConfigMap`.\n\n\nWait a few seconds, and run `kubectl describe configmap gitlab-gitops` to\ncheck that the changes got applied to your cluster. 
You should see\nsomething similar:\n\n\n```\n\nName:         gitlab-gitops\n\nNamespace:    default\n\nLabels:       \u003Cnone>\n\nAnnotations:  config.k8s.io/owning-inventory: 502-28431043\n              k8s-agent.gitlab.com/managed-object: managed\n\nData\n\n====\n\nkey:\n",[9,232,1288],{"slug":1462,"featured":6,"template":688},"gitops-with-gitlab-connecting-the-cluster","content:en-us:blog:gitops-with-gitlab-connecting-the-cluster.yml","Gitops With Gitlab Connecting The Cluster","en-us/blog/gitops-with-gitlab-connecting-the-cluster.yml","en-us/blog/gitops-with-gitlab-connecting-the-cluster",{"_path":1468,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":1469,"content":1474,"config":1479,"_id":1481,"_type":13,"title":1482,"_source":15,"_file":1483,"_stem":1484,"_extension":18},"/en-us/blog/gitops-with-gitlab-infrastructure-provisioning",{"title":1470,"description":1471,"ogTitle":1470,"ogDescription":1471,"noIndex":6,"ogImage":1338,"ogUrl":1472,"ogSiteName":675,"ogType":676,"canonicalUrls":1472,"schema":1473},"GitOps with GitLab: Infrastructure provisioning with GitLab and Terraform","In part two of our GitOps series, we set up the infrastructure using GitLab and Terraform. Here's everything you need to know.","https://about.gitlab.com/blog/gitops-with-gitlab-infrastructure-provisioning","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"GitOps with GitLab: Infrastructure provisioning with GitLab and Terraform\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Viktor Nagy\"}],\n        \"datePublished\": \"2021-11-04\",\n      }",{"title":1470,"description":1471,"authors":1475,"heroImage":1338,"date":1476,"body":1477,"category":683,"tags":1478},[765],"2021-11-04","\n\n_It is possible to use GitLab as a best-in-class GitOps tool, and this blog post series is going to show you how. 
These easy-to-follow tutorials will focus on different user problems, including provisioning, managing a base infrastructure, and deploying various third-party or custom applications on top of them. You can find the entire \"Ultimate guide to GitOps with GitLab\" tutorial series [here](/blog/the-ultimate-guide-to-gitops-with-gitlab/)._\n\nThis post focuses on setting up the underlying infrastructure using GitLab and Terraform. \n\nThe first step is to have a network and some computing instances that we can use as our Kubernetes cluster. In this project, I’ll use [Civo](https://www.civo.com) to host the infrastructure as it has the most minimal setup, but the same can be achieved using any of the hyperclouds. The GitLab documentation provides examples of how to set up a [cluster on AWS](https://docs.gitlab.com/ee/user/infrastructure/clusters/connect/new_eks_cluster.html) or [GCP](https://docs.gitlab.com/ee/user/infrastructure/clusters/connect/new_gke_cluster.html).\n\nWe want to have a project that describes our [infrastructure as code (IaC)](/topics/gitops/infrastructure-as-code/). As Terraform is today the de facto standard in infrastructure provisioning, we’ll use Terraform for the task. Terraform requires a state storage backend; we will use the GitLab-managed Terraform state, which is very easy to get started with. Moreover, we will set up a pipeline to run the infrastructure changes automatically if they are merged to the main branch.\n\n## What infrastructure-related steps are we going to codify?\n\n1. Create a VPC\n2. Set up a Kubernetes cluster\n\nActually, we will create separate Terraform projects for these steps under a single GitLab project. We split the infrastructure because in a real-world scenario, these projects will likely be a bit bigger, and Terraform slows down quite a lot if it has to deal with big projects. 
In general, it is a good practice to have small Terraform projects, and think about the infrastructure in a layered way, where higher layers can reference the output of lower layers. There are [many ways to access the output of another Terraform project](https://www.terraform.io/docs/language/state/remote-state-data.html#alternative-ways-to-share-data-between-configurations), and we leave it up to the reader to learn more about these. In this case, we will use simple data resources.\n\nAfter this long intro, let’s get started!\n\n## Creating the network\n\nFirst, let’s create a new GitLab project. You can use either an empty project or any of the project templates. If you plan to do all these tutorials, I recommend starting with the [Cluster Management Project template](https://docs.gitlab.com/ee/user/clusters/management_project_template.html). Once the project is ready, let’s create the following files:\n\n- A `terraform/network/main.tf` file:\n\n```hcl\nterraform {\n  required_providers {\n    civo = {\n      source = \"civo/civo\"\n      version = \"0.10.10\"\n    }\n  }\n  backend \"http\" {\n  }\n}\n\n# Configure the Civo Provider\nprovider \"civo\" {\n  token = var.civo_token\n  region = local.region\n}\n\nresource \"civo_network\" \"network\" {\n    label = \"development\"\n}\n```\n\nThis file describes almost everything we want this project to do. The first block configures Terraform to use the `civo/civo` provider and a simple `http` backend for state storage. As I mentioned above, we will use [the GitLab-managed Terraform state](https://docs.gitlab.com/ee/user/infrastructure/iac/terraform_state.html), which acts like an `http` backend from Terraform’s point of view. The GitLab backend is versioned and encrypted by default, and GitLab CI/CD contains all the environment variables needed to access it. I will demonstrate later how you can access the backend either from the local command line or from GitLab CI/CD.\n\nNext, we configure the Civo provider. 
You can see that here we use two variables, an input and a local variable. These will be defined in separate files below. Finally, we describe a network and give it the “development” label.\n\n- A `terraform/network/outputs.tf` file:\n\n```hcl\noutput \"network\" {\n  value = civo_network.network.id\n}\n```\n\nThis file just provides the network id as an output variable from Terraform. Other projects could consume it. We won’t use this, but I consider it a good practice as it might help to debug issues.\n\n- A `terraform/network/locals.tf` file:\n\n```hcl\nlocals {\n  region = \"LON1\"\n}\n```\n\nHere we define the `region` local as mentioned under the description of the `main.tf` file. Why aren’t we making it an input variable? Because this is closely related to our infrastructure and for this reason we want to keep it in code. It should be version controlled and changes should be reviewed following the team’s processes. We could write the values into a `.tfvars` file also to achieve versioning and have it as a variable. I prefer to keep it in `hcl` to have it closer to the rest of the code.\n\n- A `terraform/network/variables.tf` file:\n\n```hcl\nvariable \"civo_token\" {\n  type = string\n  sensitive = true\n}\n```\n\nFinally, we define the Civo access token as an input variable.\n\nNow, we are ready with the Terraform code, but we cannot access the GitLab state backend yet. For that we either need to configure our local environment or GitLab CI/CD. Let’s see both setups.\n\n## Running Terraform locally\n\nYou can run Terraform either locally or using GitLab CI/CD. The following two sections present both approaches.\n\n### Accessing the GitLab Terraform state backend locally\n\nThe simplest way to configure the “http” backend is using environment variables. There are many environment variables needed though! For this reason, I prefer to use a collection of [direnv](https://direnv.net/) files. 
We will need all these environment variables configured:\n\n```\nTF_HTTP_PASSWORD\nTF_HTTP_USERNAME\nTF_HTTP_ADDRESS\nTF_HTTP_LOCK_ADDRESS\nTF_HTTP_LOCK_METHOD\nTF_HTTP_UNLOCK_ADDRESS\nTF_HTTP_UNLOCK_METHOD\nTF_HTTP_RETRY_WAIT_MIN\n```\n\nDirenv enables us to add a few files to our repository to describe the above environment variables in a nice and scalable way. Clearly, there are some variables that are sensitive, like `TF_HTTP_PASSWORD`, so this should not be stored in git. Moreover, we could reuse most of these variables in the other two Terraform projects we are going to create. With these considerations in mind, let’s create the following 3 files:\n\n- Create `terraform/network/.envrc`: \n\n```\nexport TF_STATE_NAME=civo-${PWD##*terraform/}\nsource_env ../../.main.env\n```\n\nThis sets the `TF_STATE_NAME` variable to `civo-network` using some bash magic and loads the `.main.env` file from the root of the repository using the `source_env` method provided by `direnv`. This can be added to version control safely.\n\n- Create `.main.env`:\n\n```\nsource_env_if_exists ./.local.env\n\nCI_PROJECT_ID=28431043\nexport TF_HTTP_PASSWORD=\"${CI_JOB_TOKEN:-$GITLAB_ACCESS_TOKEN}\"\nexport TF_HTTP_USERNAME=\"${GITLAB_USER_LOGIN}\"\nexport GITLAB_URL=https://gitlab.com\n\nexport TF_VAR_remote_address_base=\"${GITLAB_URL}/api/v4/projects/${CI_PROJECT_ID}/terraform/state\"\nexport TF_HTTP_ADDRESS=\"${TF_VAR_remote_address_base}/${TF_STATE_NAME}\"\nexport TF_HTTP_LOCK_ADDRESS=\"${TF_HTTP_ADDRESS}/lock\"\nexport TF_HTTP_LOCK_METHOD=\"POST\"\nexport TF_HTTP_UNLOCK_ADDRESS=\"${TF_HTTP_LOCK_ADDRESS}\"\nexport TF_HTTP_UNLOCK_METHOD=\"DELETE\"\nexport TF_HTTP_RETRY_WAIT_MIN=5\n\n# export TF_LOG=\"TRACE\"\n```\n\nThis file contains the bulk of the environment variables we need, and can be added to version control safely as no secrets are stored there. The first line loads the `.local.env` file that will contain the sensitive values, again using a `direnv` method. 
The second line contains the GitLab project ID. This is shown under the project name of your GitLab project. The next three lines configure access to GitLab. The username and password will be populated from the `.local.env` file, while the `GITLAB_URL` variable is there to help you if you are on a self-managed GitLab instance.\n\n- Create `.local.env` and add it to `.gitignore`:\n\n```\nGITLAB_ACCESS_TOKEN=\u003Cyour GitLab personal access token>\nGITLAB_USER_LOGIN=\u003Cyour GitLab username>\nexport TF_VAR_civo_token=\u003Cyour Civo access token>\n```\n\nClearly, I cannot provide the values for this file. Please fill them out with your credentials. You can generate a GitLab personal access token under your settings. To access the GitLab-managed Terraform state using a personal access token, the token should have the `api` scope enabled.\n\nWarning: **Don’t forget to add this file to `.gitignore`**. Actually, I have it in my global gitignore file to avoid accidental commits.\n\nNow that the environment variables are set up, you should tell direnv to start using them. When you `cd` into the `terraform/network` directory, a warning should appear asking you to run `direnv allow`. Enable the environment variables:\n\n```\ncd terraform/network\ndirenv allow\n```\n\n### Creating the network - finally\n\nLet’s see if we managed to set up everything right!\n\n```\nterraform init\nterraform plan\n```\n\nThe first command just initializes Terraform, downloads the Civo plugin and does some sanity checks. The second command, on the other hand, connects to the remote state backend and computes the necessary changes to provision the infrastructure we described in this project.\n\nIf we like the changes, we can apply them with:\n\n```\nterraform apply\n```\n\n_Nota bene_, in a real-world setup, you would likely output a plan file from `terraform plan` and feed it into `terraform apply`, just like the CI/CD setup will do it later. 
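Sketched out, that plan-file workflow is just two commands (the `plan.cache` filename is arbitrary):

```
terraform plan -out=plan.cache   # write the computed changes to a file
terraform apply plan.cache       # apply exactly the reviewed plan, nothing else
```

The base Terraform CI template follows roughly the same pattern, passing the plan file from the build job to the deploy job as an artifact.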
Anyway, this is good enough for us, so let’s create the cluster next.\n\n### Running Terraform using GitLab CI/CD\n\nNote: This section assumes that you have access to GitLab Runners to run the CI/CD jobs.\n\nGiven the flexibility of GitLab CI/CD, it can be set up in many different ways. Here we will build a pipeline that incorporates the most important aspects of a Terraform-oriented pipeline, without requiring merge requests or any other process. The only restriction we'll place on it is that changes should only be applied on the main branch, and only as a manual action.\n\nCopy the following code into `.gitlab-ci.yml` in the root of your project:\n\n```yaml\ninclude:\n  - template: \"Terraform/Base.latest.gitlab-ci.yml\"\n\nstages:\n- init\n- build\n- deploy\n\nnetwork:init:\n  extends: .terraform:init\n  stage: init\n  variables:\n    TF_ROOT: terraform/network\n    TF_STATE_NAME: network\n  only:\n    changes:\n      - \"terraform/network/*\"\n\nnetwork:review:\n  extends: .terraform:build\n  stage: build\n  variables:\n    TF_ROOT: terraform/network\n    TF_STATE_NAME: network\n  resource_group: tf:network\n  only:\n    changes:\n      - \"terraform/network/*\"\n\nnetwork:deploy:\n  extends: .terraform:deploy\n  stage: deploy\n  variables:\n    TF_ROOT: terraform/network\n    TF_STATE_NAME: network\n  resource_group: tf:network\n  environment:\n    name: dns\n  when: manual\n  only:\n    changes:\n      - \"terraform/network/*\"\n    variables:\n      - $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH\n```\n\nThis CI pipeline re-uses [the latest base Terraform CI template](https://gitlab.com/gitlab-org/gitlab/-/tree/master/lib/gitlab/ci/templates/Terraform) shipped with GitLab, and runs the jobs by simply parameterizing them, much like function calls. 
Let's quickly review the keys used:\n\n- the [`stages`](https://docs.gitlab.com/ee/ci/yaml/#stages) keyword provides a list of stages to compose the pipeline\n- the [`extends`](https://docs.gitlab.com/ee/ci/yaml/#extends) keyword refers to the job defined in [the base Terraform template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Terraform/Base.latest.gitlab-ci.yml)\n- the [`variables`](https://docs.gitlab.com/ee/ci/yaml/#variables) keyword parameterizes the job for our requirements\n- the [`resource_group`](https://docs.gitlab.com/ee/ci/yaml/#resource_group) keyword ensures that only one potentially conflicting job runs at a time\n- the [`only`](https://docs.gitlab.com/ee/ci/yaml/#only--except) keyword restricts runs to specific situations\n\nIf you commit this file and push it to GitLab, a new pipeline will be created that, as a last step, provides a manual job to create your network. We will extend this file later throughout this tutorial series.\n\n## Create a Kubernetes cluster\n\nThe code required for the cluster will be very similar to the code for the network.\n\n- Add the `terraform/cluster/main.tf` file:\n\n```hcl\nterraform {\n  required_providers {\n    civo = {\n      source = \"civo/civo\"\n      version = \"0.10.4\"\n    }\n  }\n  backend \"http\" {\n  }\n}\n\n# Configure the Civo Provider\nprovider \"civo\" {\n  token = var.civo_token\n  region = local.region\n}\n\nresource \"civo_kubernetes_cluster\" \"dev-cluster\" {\n    name = \"dev-cluster\"\n    // tags = \"gitlab demo\"  // Do not add tags! There is a bug in the civo-provider :(\n    network_id = data.civo_network.network.id\n    applications = \"\"\n    num_target_nodes = 3\n    target_nodes_size = element(data.civo_instances_size.small.sizes, 0).name\n}\n```\n\nThe only difference compared to `terraform/network/main.tf` is the last resource, which describes the cluster. You can see how we reference the network created before. 
Of course, we'll need a `data` resource for this and the instance sizes.\n\n- Add the `terraform/cluster/data.tf` file:\n\n```hcl\ndata \"civo_instances_size\" \"small\" {\n    filter {\n        key = \"name\"\n        values = [\"g3.small\"]\n        match_by = \"re\"\n    }\n\n    filter {\n        key = \"type\"\n        values = [\"instance\"]\n    }\n\n}\n\ndata \"civo_network\" \"network\" {\n    label = \"development\"\n}\n```\n\n\n- The `terraform/cluster/outputs.tf` file outputs some useful details. We won't use them now, but they often come in handy in the longer term.\n\n```hcl\noutput \"cluster\" {\n  value = {\n    status = civo_kubernetes_cluster.dev-cluster.status\n    master_ip = civo_kubernetes_cluster.dev-cluster.master_ip\n    dns_entry = civo_kubernetes_cluster.dev-cluster.dns_entry\n  }\n}\n```\n\n- The `terraform/cluster/locals.tf` file is the same as for the network project:\n\n```hcl\nlocals {\n  region = \"LON1\"\n}\n```\n\n- The `terraform/cluster/variables.tf` file is the same as for the network project:\n\n```hcl\nvariable \"civo_token\" {\n  type = string\n  sensitive = true\n}\n```\n\n### Provision the cluster\n\nLet's see how we can extend the previous local and CI/CD setups to run this Terraform project!\n\n#### Running locally\n\n- Create `terraform/cluster/.envrc` as you did for the network project:\n\n```\nexport TF_STATE_NAME=civo-${PWD##*terraform/}\nsource_env ../../.main.env\n```\n\nNow run Terraform:\n\n```bash\nterraform init\nterraform plan\nterraform apply\n```\n\n#### Running from CI/CD\n\nExtend the `.gitlab-ci.yml` file with the following three jobs:\n\n```yaml\ncluster:init:\n  extends: .terraform:init\n  stage: init\n  variables:\n    TF_ROOT: terraform/cluster\n    TF_STATE_NAME: cluster\n  only:\n    changes:\n      - \"terraform/cluster/*\"\n\ncluster:review:\n  extends: .terraform:build\n  stage: build\n  variables:\n    TF_ROOT: terraform/cluster\n    TF_STATE_NAME: cluster\n  resource_group: tf:cluster\n  only:\n    changes:\n      - \"terraform/cluster/*\"\n\ncluster:deploy:\n  extends: .terraform:deploy\n  stage: deploy\n  variables:\n    TF_ROOT: terraform/cluster\n    TF_STATE_NAME: cluster\n  resource_group: tf:cluster\n  environment:\n    name: dev-cluster\n  when: manual\n  only:\n    changes:\n      - \"terraform/cluster/*\"\n    variables:\n      - $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH\n```\n\nAs you can see, these are the same jobs we saw before, just parameterized for the `cluster` Terraform project.\n\nOnce you push your code to GitLab, your cluster should be ready in a few minutes!\n\n_[Click here](/blog/the-ultimate-guide-to-gitops-with-gitlab/) for the next tutorial._\n\n\n\n",[539,9,1288],{"slug":1480,"featured":6,"template":688},"gitops-with-gitlab-infrastructure-provisioning","content:en-us:blog:gitops-with-gitlab-infrastructure-provisioning.yml","Gitops With Gitlab Infrastructure Provisioning","en-us/blog/gitops-with-gitlab-infrastructure-provisioning.yml","en-us/blog/gitops-with-gitlab-infrastructure-provisioning",{"_path":1486,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":1487,"content":1492,"config":1498,"_id":1500,"_type":13,"title":1501,"_source":15,"_file":1502,"_stem":1503,"_extension":18},"/en-us/blog/gitops-with-gitlab-manage-the-agent",{"title":1488,"description":1489,"ogTitle":1488,"ogDescription":1489,"noIndex":6,"ogImage":1240,"ogUrl":1490,"ogSiteName":675,"ogType":676,"canonicalUrls":1490,"schema":1491},"Self-managing Kubernetes agent installation with GitOps","This is the eighth and last article in a series of tutorials on how to do GitOps with GitLab.","https://about.gitlab.com/blog/gitops-with-gitlab-manage-the-agent","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"GitOps with GitLab: Turn a GitLab agent for Kubernetes installation to manage itself\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Viktor Nagy\"}],\n        
\"datePublished\": \"2022-03-30\",\n      }",{"title":1493,"description":1489,"authors":1494,"heroImage":1240,"date":1495,"body":1496,"category":683,"tags":1497},"GitOps with GitLab: Turn a GitLab agent for Kubernetes installation to manage itself",[765],"2022-03-30","\n\n_It is possible to use GitLab as a best-in-class GitOps tool, and this blog post series is going to show you how. These easy-to-follow tutorials will focus on different user problems, including provisioning, managing a base infrastructure, and deploying various third-party or custom applications on top of them. You can find the entire \"Ultimate guide to GitOps with GitLab\" tutorial series [here](/blog/the-ultimate-guide-to-gitops-with-gitlab/)._\n\nIn this article, we will build upon the first few articles, and will turn a GitLab agent for Kubernetes installation to manage itself. This is highly recommended for production usage as it puts your `agentk` deployment under your GitOps project, and enables flawless and simple upgrades.\n\n## Prerequisites\n\nThis article builds on a few previous articles from this series and makes the following assumptions:\n\n- You have [an agent connection set up using the `kpt` based method](/blog/gitops-with-gitlab-connecting-the-cluster/).\n- You have [set up Bitnami's Sealed secrets](/blog/gitops-with-gitlab-secrets-management/).\n- You understand [how to use `kustomize` with the agent](/blog/gitops-with-gitlab/).\n\n## The goal\n\nThe goal of this tutorial is to manage a GitLab agent for Kubernetes deployment using that given agent. This has several benefits, including: \n\n- By turning the agent to manage itself, the agent configuration and deployment is managed in code. 
As a result, all the code-oriented tools, including Merge Requests, Approvals, and branching, are there to support your processes and policies.\n- Managing a fleet of agent installations in code enables simple upgrades of the deployments.\n\n### Upgrading GitLab and the GitLab agent for Kubernetes\n\nA single GitLab instance might have dozens of agent connections. How should you upgrade all these deployments in a coordinated way? Turning everything into code simplifies the upgrade process a lot.\n\nWe have the GitLab-agent [version compatibility documented](https://docs.gitlab.com/ee/user/clusters/agent/install/index.html#upgrades-and-version-compatibility). The recommended approach is to first upgrade GitLab together with `KAS`, the GitLab-side component of the connection, and then upgrade all the `agentk` deployments. \n\nIf you manage the `agentk` deployments in code, the upgrade requires only bumping the version number in code, and the `agentk` instances will take care of upgrading themselves.\n\n## Turning an agent installation to manage itself\n\nLet's do a quick recap and an overview of how we will use the tools.\n\nWe use `kpt` to check out tagged `agentk` deployment manifests. As the manifests are a set of `kustomize` layers, we can extend them with our own overlays if needed, or just customize the setup per our requirements. The agent connection requires a token to authenticate with GitLab. We can use Bitnami's Sealed Secrets to store an encrypted secret in the repo.\n\nAll the above code can be put under version control safely. Moreover, we can use GitLab CI/CD to hydrate the `kustomize` package into vanilla Kubernetes manifests that the agent can deal with.\n\nLet's see the above in action!\n\n### Kustomize layer with encrypted secret\n\nBased on the previous articles, we have the `kpt` package checked out under `packages/gitlab-agent`. We would like to store the vanilla Kubernetes manifests in the repository. 
We can run `kustomize build packages/gitlab-agent/cluster > kubernetes/gitlab-agent.yaml` to get the manifests, but this will include the unencrypted authentication token too.\n\nTo never output the unencrypted token, we should turn it into a sealed secret.\n\nNavigate to the `gitlab-agent` Terraform project, and create a Kubernetes secret from the token with `terraform output -raw token_secret | kubectl create secret generic gitlab-agent-token -n gitlab-agent --dry-run=client --type=Opaque --from-file=token=/dev/stdin -o yaml > ../../ignored/gitlab-agent-token.yaml`. If you followed the instructions in the previous articles, the files under the `ignored` directory are never committed to `git`.\n\nWe will turn this unencrypted secret into a sealed secret. As the secret will already exist in the cluster, we should instruct the Bitnami Sealed Secrets controller to pull it under its management. Moreover, as kustomize appends a random hash to every secret name, we should enable renaming the secret within the namespace. We can achieve both by adding two annotations to the unencrypted secret object.\n\nAdd the following annotations to `ignored/gitlab-agent-token.yaml`:\n\n```\nannotations:\n  sealedsecrets.bitnami.com/managed: \"true\"\n  sealedsecrets.bitnami.com/namespace-wide: \"true\"\n```\n\nNext, we should create an encrypted secret from the ignored, unencrypted one by running `bin/seal-secret.sh ignored/gitlab-agent-token.yaml packages/gitlab-agent/sealed-secret` in the root of our project. This creates the encrypted secret under `packages/gitlab-agent/sealed-secret/SealedSecret.gitlab-agent-token.yaml`. Now, we need a kustomize layer that will use this secret instead of the original one that came with `kpt`. 
Let's create the following files around the encrypted secret:\n\n- Create `packages/gitlab-agent/sealed-secret/kustomization.yaml` as:\n\n```yaml\napiVersion: kustomize.config.k8s.io/v1beta1\nkind: Kustomization\nresources:\n- ../base\n- SealedSecret.gitlab-agent-token.yaml\ncomponents:\n- ../cluster/components/gitops-read-all\n- ../cluster/components/gitops-write-all\n- ../cluster/components/cilium-alert-read\nconfigurations:\n- configuration/sealed-secret-config.yaml\nsecretGenerator:\n- name: gitlab-agent-token\n  behavior: replace\n  type: Opaque\n  namespace: gitlab-agent\n  options:\n    annotations:\n      sealedsecrets.bitnami.com/managed: \"true\"\n      sealedsecrets.bitnami.com/namespace-wide: \"true\"\n```\n\n- Create `packages/gitlab-agent/sealed-secret/configuration/sealed-secret-config.yaml` as:\n\n```yaml\nnameReference:\n- kind: Secret\n  fieldSpecs:\n  - kind: SealedSecret\n    path: metadata/name\n  - kind: SealedSecret\n    path: spec/template/metadata/name\n```\n\nThis configuration enables us to reference the name of the Sealed Secret in the `secretGenerator`.\n\nWe created a new `kustomize` overlay that builds on the `base` and `cluster` layers, but will use the sealed secret. We can hydrate this into vanilla manifests using `kustomize build packages/gitlab-agent/sealed-secret > kubernetes/gitlab-agent.yaml`. This configuration does not include any unencrypted, sensitive data. As a result, we can commit it freely using `git commit`.\n\n### Adopt the agent by the agent\n\nRight now the agent configuration file looks similar to: \n\n```yaml\ngitops:\n  # Manifest projects are watched by the agent. 
Whenever a project changes,\n  # GitLab deploys the changes using the agent.\n  manifest_projects:\n  - id: path/to/your/project\n    default_namespace: gitlab-agent\n    # Paths inside of the repository to scan for manifest files.\n    # Directories with names starting with a dot are ignored.\n    paths:\n    - glob: 'kubernetes/test_config.yaml'\n    - glob: 'kubernetes/**/*.yaml'\n```\n\nIf we pushed the previously hydrated manifests, `agentk` would fail to apply them, complaining about missing inventories. We can easily fix this by temporarily setting a looser inventory policy:\n\n```yaml\ngitops:\n  # Manifest projects are watched by the agent. Whenever a project changes,\n  # GitLab deploys the changes using the agent.\n  manifest_projects:\n  - id: path/to/your/project\n    default_namespace: gitlab-agent\n    inventory_policy: adopt_all\n    # Paths inside of the repository to scan for manifest files.\n    # Directories with names starting with a dot are ignored.\n    paths:\n    - glob: 'kubernetes/test_config.yaml'\n    - glob: 'kubernetes/**/*.yaml'\n```\n\nWith the inventory policy configured, we can commit and push our changes to GitLab. The agent will see the new configuration and resources, and will apply them to the cluster. From now on, you can change the code in the repository, push it to git, and the changes will be automatically applied to your cluster.\n\n#### What are inventory policies?\n\nThe GitLab agent for Kubernetes knows about the managed resources using so-called inventory objects. In technical terms, an inventory object is just a `ConfigMap` with a unique label. Whenever the agent sees an object that it should manage, it applies the same label. 
This way, every agent can easily find the resources that it manages.\n\nYou can read more about the possible [inventory policy configurations in the documentation](https://docs.gitlab.com/ee/user/infrastructure/clusters/deploy/inventory_object.html).\n\n\n#### A word about RBAC\n\nDepending on the authorization rights given to the `agentk` deployment, not every change may be possible. For example, if you would like to create a new `ClusterRole` and `ClusterRoleBinding` in a new `kustomize` overlay and apply them with the Agent, the change will fail if your current role-based access control (RBAC) configuration does not allow your `agentk` deployment to create these resources. In this case, you should either grant broader rights to your `agentk` service account first, or apply the changes manually from your command line.\n\n### Automatic hydration\n\nNow, if you want to change something in your agent deployment, you need to take two actions:\n\n- change the code in the `kpt` package\n- run `kustomize build` to hydrate the results\n\nLet's automate the second step so you can focus on your main job only. Following the setup of [a GitOps-style Auto DevOps pipeline](/blog/gitops-with-gitlab/#hydrating-the-manifests), we need to extend the `hydrate-packages` job:\n\n\n```yaml\nhydrate-packages:\n  ...\n  script:\n  - mkdir -p new_manifests\n  ...\n  - kustomize build packages/gitlab-agent/sealed-secret > new_manifests/gitlab-agent.yaml\n```\n\nWe can re-use all the other automation as presented in the previous articles.\n\n## How to upgrade `agentk`?\n\nJust to provide a practical example, let's see how we can use the above setup to easily upgrade an `agentk` deployment to a newer version.\n\nBy running `kustomize cfg set packages/gitlab-agent agent-version v14.9.1` we set the intended `agentk` version to `v14.9.1`. 
You can commit and push this change to git, and sit back in your chair to see how the changes are being rolled out across your clusters. You can point several agent configurations at the same `kubernetes/gitlab-agent.yaml` manifest, and upgrade all of them at once.\n\n## Recap\n\nIn this article, we have seen:\n\n- how to turn an Agent deployment to manage itself\n- how to extend the default `kpt` project with a custom `kustomize` overlay to customize the `agentk` deployment\n- how to easily upgrade a set of `agentk` deployments\n- how to pull already existing objects to be managed by the Agent using inventory policies\n\n_Note: This is the final installment in this series of [how to do GitOps with GitLab](/blog/the-ultimate-guide-to-gitops-with-gitlab)._\n\n\n",[539,9,748],{"slug":1499,"featured":6,"template":688},"gitops-with-gitlab-manage-the-agent","content:en-us:blog:gitops-with-gitlab-manage-the-agent.yml","Gitops With Gitlab Manage The Agent","en-us/blog/gitops-with-gitlab-manage-the-agent.yml","en-us/blog/gitops-with-gitlab-manage-the-agent",{"_path":1505,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":1506,"content":1511,"config":1516,"_id":1518,"_type":13,"title":1519,"_source":15,"_file":1520,"_stem":1521,"_extension":18},"/en-us/blog/gitops-with-gitlab-secrets-management",{"title":1507,"description":1508,"ogTitle":1507,"ogDescription":1508,"noIndex":6,"ogImage":1338,"ogUrl":1509,"ogSiteName":675,"ogType":676,"canonicalUrls":1509,"schema":1510},"GitOps with GitLab: How to tackle secrets management","In part four of our GitOps series, we learn how to manage secrets with the GitLab Agent for Kubernetes.","https://about.gitlab.com/blog/gitops-with-gitlab-secrets-management","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"GitOps with GitLab: How to tackle secrets management\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Viktor Nagy\"}],\n        
\"datePublished\": \"2021-12-02\",\n      }",{"title":1507,"description":1508,"authors":1512,"heroImage":1338,"date":1513,"body":1514,"category":683,"tags":1515},[765],"2021-12-02","\n\n_It is possible to use GitLab as a best-in-class GitOps tool, and this blog post series is going to show you how. These easy-to-follow tutorials will focus on different user problems, including provisioning, managing a base infrastructure, and deploying various third-party or custom applications on top of them. You can also view our entire [\"Ultimate guide to GitOps with GitLab\"](/blog/the-ultimate-guide-to-gitops-with-gitlab/) tutorial series._\n\nIn this article we will use our cluster connection to manage secrets within our cluster.\n\n## Prerequisites\n\nThis article assumes that you have a Kubernetes cluster connected to GitLab using the GitLab Agent for Kubernetes. If you don't have such a cluster, I recommend looking at the linked articles above so you have a similar setup from where we will start today.\n\n## A few words about secrets management\n\nThe Kubernetes `Secret` resource is a rather tricky one! By design, secrets should have limited access and should be encrypted at rest and in transit. Still, by default, Kubernetes does not encrypt secrets at rest and accessing them might not be restricted in your cluster. We will not go into detail about how to secure your cluster with respect to secrets in this article. Instead, we want to focus on getting some secrets configured in your cluster with a GitOps approach.\n\nManaging secrets with GitOps means you store those secrets within your Git repository. Of course, you should never store unencrypted secrets in a repo, and some security people are even reluctant to store encrypted secrets in Git. We will not be that worried, but you should consider if this is an acceptable risk for you. 
There is an alternative we'll talk about, below, if you prefer to not manage your secrets in Git.\n\nThere are a few benefits of Git-based secrets management:\n\n- you get versioning by default\n- collaboration is supported using merge requests\n- as secrets are in code, you push responsibilities towards the development team\n- the tools used are well-known to developers\n\n## Secrets management with GitLab\n\nWhen it comes to secrets, Kubernetes, and GitLab, there are at least 3 options to choose from:\n\n- create secrets automatically from environment variables in GitLab CI\n- manage secrets through HashiCorp Vault and GitLab CI\n- manage secrets in git with a GitOps approach\n\n### Create secrets automatically from environment variables in GitLab CI\n\nThe Auto Deploy template applies every [`K8S_SECRET_` prefixed environment variable](https://docs.gitlab.com/ee/topics/autodevops/customize.html#application-secret-variables) into your cluster as a Kubernetes Secret. Later, your applications can reference these secrets. This approach is the simplest to use, especially if you would like to use [Auto DevOps](/topics/devops/). We will look into it in a future article.\n\nWhile simple to use, with this approach your secrets are stored in the GitLab database, instead of `Git`. That means you lose versioning of the secrets, you need `Maintainer` rights to modify these secrets, and you lose the ability to approve a change of secret in a merge request.\n\n### Manage secrets through HashiCorp Vault and GitLab CI\n\n[GitLab CI/CD integrates with HashiCorp Vault](https://docs.gitlab.com/ee/ci/examples/authenticating-with-hashicorp-vault/#authenticating-and-reading-secrets-with-hashicorp-vault) to support advanced secrets management use cases. You can combine the `K8S_SECRET_` prefixed use case even with Vault-based secrets, and have the secrets applied automatically. 
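For reference, GitLab CI/CD exposes Vault secrets to jobs through the `secrets` keyword. A minimal sketch — the job name, Vault path, and engine mount below are placeholders, and the Vault server details (such as `VAULT_SERVER_URL`) must be configured separately:

```yaml
# Hypothetical job using GitLab CI/CD's native Vault integration.
# "production/db" and the "kv-v2" engine mount are placeholders.
read-secret:
  secrets:
    DATABASE_PASSWORD:
      vault: production/db/password@kv-v2  # field "password" of secret "production/db"
  script:
    # By default, the variable holds the path to a temp file containing the secret.
    - cat "$DATABASE_PASSWORD"
```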
\n\nWith this approach, you get all the benefits of HashiCorp Vault, but there is a question: why move secrets from Vault through GitLab into your cluster instead of retrieving them directly from within your cluster? We recommend leaving GitLab out of this flow unless you have a really good reason to give GitLab access to the secrets too! Vault has really great Kubernetes support, so retrieving secrets directly should be feasible.\n\n### Manage secrets in Git with a GitOps approach\n\nTo manage secrets in Git, we will need some kind of tooling to take care of the encryption/decryption of the secrets. In this article, I will show you how to set up and use [Bitnami's Sealed Secrets](https://github.com/bitnami-labs/sealed-secrets), but you can try other tools, like [SOPS](https://github.com/mozilla/sops), too. We will look into Bitnami's approach as it targets Kubernetes exclusively, unlike SOPS, which supports other use cases too and might need a bit more setup for Kubernetes.\n\nBitnami's Sealed Secrets is composed of an in-cluster controller and a CLI tool. The cluster component defines a `SealedSecret` custom resource that stores the encrypted secret and related metadata. Once a `SealedSecret` is deployed into the cluster, the controller decrypts it and creates a native Kubernetes `Secret` resource from it. To create a `SealedSecret` resource, the `kubeseal` utility can be used: it takes a public key and encrypts a native Kubernetes `Secret` into a `SealedSecret`, and it can also retrieve the public key from the cluster-side controller.\n\n## Setting up Bitnami's Sealed Secrets\n\nAs the GitLab Agent supports pure Kubernetes manifests to do GitOps, we will need the manifests for Sealed Secrets. Open the [Sealed Secrets releases page](https://github.com/bitnami-labs/sealed-secrets/releases/) and find the most recent release (don't be fooled by the `helm` releases!). 
At the time of writing this article, the most recent [release is v0.16.0](https://github.com/bitnami-labs/sealed-secrets/releases/tag/v0.16.0). From there you can download the release `yaml`; if your cluster supports RBAC, I recommend the basic `controller.yaml` file.\n\n- Save and commit the `controller.yaml` under `kubernetes/sealed-secrets.yaml`\n\nPush the changes and wait a few seconds for them to get applied. Check that they got applied successfully using: `kubectl get pods -n kube-system -l name=sealed-secrets-controller`\n\n## Retrieving the public key\n\nWhile the user can encrypt a secret directly with `kubeseal`, this approach requires them to have access to the Kube API. Instead of providing access, we can fetch the public key from the Sealed Secrets controller and store it in the Git repo. The public key can be used to encrypt secrets, but is useless for decrypting them.\n\n```bash\nkubeseal --fetch-cert > sealed-secrets.pub.pem\n```\n\n### How to avoid storing unencrypted secrets\n\nI prefer to have an `ignored` directory within my Git repo. The content of this directory is never committed to Git, and I put all sensitive data under this directory.\n\n```bash\nmkdir ignored\ncat \u003C\u003CEOF > ignored/.gitignore\n*\n!.gitignore\nEOF\n```\n\n## Creating sealed secrets\n\nNow, you can create sealed secrets with the following two commands:\n\n```bash\necho \"Very secret\" | kubectl create secret generic my-secret -n gitlab-agent --dry-run=client --type=Opaque --from-file=token=/dev/stdin -o yaml > ignored/my-secret.yaml\nkubeseal --format=yaml --cert=sealed-secrets.pub.pem \u003C ignored/my-secret.yaml > kubernetes/SealedSecret.my-secret.yaml\n```\n\nThe first command creates a regular Kubernetes `Secret` resource in the `gitlab-agent` namespace. Setting the namespace is important when using Sealed Secrets, as every SealedSecret is scoped to a specific namespace. 
You can read more about this in the Sealed Secrets documentation.\n\nThe second command takes a `Secret` resource object and turns it into an encrypted `SealedSecret` resource. In my case, the secret file:\n\n```yaml\napiVersion: v1\ndata:\n  token: VmVyeSBzZWNyZXQK\nkind: Secret\nmetadata:\n  creationTimestamp: null\n  name: my-secret\n  namespace: gitlab-agent\ntype: Opaque\n```\n\ngot turned into:\n\n```yaml\napiVersion: bitnami.com/v1alpha1\nkind: SealedSecret\nmetadata:\n  creationTimestamp: null\n  name: my-secret\n  namespace: gitlab-agent\nspec:\n  encryptedData:\n    token: AgC1m/D1UwliKD3C2QSv/g+zBi1qGz1YTLZfqnl5JJ4NydCatKzsp8LZr2stIlkwcS3f2YAo/ZIq1OUhOgSgkuNMwVdqsBx1zq7Z3xpGLMIMe7B3XhQ+ExWwqgrm1dTiTDHaH9eXsZWaNsruKQU0F8oGxgLfO/axEZeGWd4WngZRaed9B43dy2k05B6fZnxmwtUVSpr86MO52fX06/QdbvB8MZTrYb7qFuL14U0IDvdFl4l8sPl2rrXsriKg0fJHIV6XtlCwPpQGozTZTUX8nbvU0yXothBzPbaIUfXseFqaW8i/i0Ai+aKhWQAjPGooVAXGwKsuve16DxZ6GJPp1ymR1cEsBkEPlYKbVCKtH5VuptCYZuTXMM6OEPzjFabaIMIUVkkciHlUMcpKFfPnpf7XbBNqZCAKjt//9L99gc48dJRyO4pCrcpFnv6287d65UGnWjmcUJNQNBhEuh9k4esfEZuBNiYIz3Ouz7Wg5HQoT6v3i3J1X5LluWEcTK1G10T7UN+QrnklH4yUtx35yLp83B5/TGICo0Yq1QnARNbKhL5EXuwAO427XO65zzJ3Lh2ymUfrBY3bHO8NW4ykO7ZNDRdj/fsge1J8k4yaxeimQapDKs4XMhoNnKqUNPQYaiQzNPRoj9JwMvtvOH+WLJqEXHIc8RooWGkdo/SB7zp3q7OuHk6HRJM+AQVP3t0r3A1bVhHonUGlv1ApduM=\n  template:\n    metadata:\n      creationTimestamp: null\n      name: my-secret\n      namespace: gitlab-agent\n    type: Opaque\n```\n\nJust commit the `SealedSecret` and quickly start to watch for the event stream using `kubectl get events --all-namespaces --watch` to see when the sealed secret is unsealed and applied as a regular `Secret`.\n\n## Utility scripts\n\nIf you found the `kubeseal` command above to be quite complex, you can wrap it in a script.\n\n- Create `bin/seal-secret.sh` with the following content:\n\n```bash\n#!/bin/sh\n\nif [ $# -ne 2 ]\n  then\n    echo \"Usage: $0 ignored/my-secret.yaml output-dir/\"\n    echo \"This script requires two arguments\"\n  
  echo \"The first argument should be the unsealed secret\"\n    echo \"The second argument should be the directory to output the sealed secret\"\n  exit 1\nfi\n\n\nSECRET_FILE=$(basename \"$1\")\n\nkubeseal --format=yaml --cert=sealed-secrets.pub.pem \u003C \"$1\" > \"$2/SealedSecret.${SECRET_FILE}\"\n\necho \"Created file $2/SealedSecret.${SECRET_FILE}\"\n```\n\nThis script takes a path to a vanilla Kubernetes secret and an output directory, and transforms your `Secret` into a `SealedSecret`.\n\n## Winding it up\n\nIn this article, we have seen how you can install Bitnami's Sealed Secrets into your cluster and set it up for static secrets management. Please note the installation method provided here works for any other third-party, off-the-shelf application that can be deployed using Kubernetes manifests only.\n\n## What is next?\n\nIn the next article, we will see how you can access a Kubernetes cluster using GitLab CI/CD and why you might want to do it even if you aim for GitOps.\n\n_[Click here](/blog/the-ultimate-guide-to-gitops-with-gitlab/) for the next tutorial._\n\n\n",[539,9,1288],{"slug":1517,"featured":6,"template":688},"gitops-with-gitlab-secrets-management","content:en-us:blog:gitops-with-gitlab-secrets-management.yml","Gitops With Gitlab Secrets Management","en-us/blog/gitops-with-gitlab-secrets-management.yml","en-us/blog/gitops-with-gitlab-secrets-management",{"_path":1523,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":1524,"content":1530,"config":1535,"_id":1537,"_type":13,"title":1538,"_source":15,"_file":1539,"_stem":1540,"_extension":18},"/en-us/blog/gitops-with-gitlab-using-ci-cd",{"title":1525,"description":1526,"ogTitle":1525,"ogDescription":1526,"noIndex":6,"ogImage":1527,"ogUrl":1528,"ogSiteName":675,"ogType":676,"canonicalUrls":1528,"schema":1529},"GitOps with GitLab: The CI/CD Tunnel","This is the fifth in a series of tutorials on how to do GitOps with 
GitLab.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749667236/Blog/Hero%20Images/Learn-at-GL.jpg","https://about.gitlab.com/blog/gitops-with-gitlab-using-ci-cd","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"GitOps with GitLab: The CI/CD Tunnel\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Viktor Nagy\"}],\n        \"datePublished\": \"2022-01-07\",\n      }",{"title":1525,"description":1526,"authors":1531,"heroImage":1527,"date":1532,"body":1533,"category":683,"tags":1534},[765],"2022-01-07","\n\n_It is possible to use GitLab as a best-in-class GitOps tool, and this blog post series is going to show you how. These easy-to-follow tutorials will focus on different user problems, including provisioning, managing a base infrastructure, and deploying various third-party or custom applications on top of them. You can find the entire \"Ultimate guide to GitOps with GitLab\" tutorial series [here](/blog/the-ultimate-guide-to-gitops-with-gitlab/)._\n\nIn this article, we will see how you can access a Kubernetes cluster using GitLab CI/CD and why you might want to do it even if you aim for [GitOps](/topics/gitops/).\n\n## Prerequisites\n\nThis post assumes that you have a Kubernetes cluster connected to GitLab using the GitLab Kubernetes Agent. If you don't have such a cluster, I recommend consulting the previous posts (linked above) to have a similar setup from where we will start today.\n\n## Meet the CI/CD Tunnel\n\nThe GitLab Kubernetes Agent is not just a GitOps tool that will enable pull-based deployments and be one more application to maintain beside the other 70 in your DevOps stack. The GitLab Kubernetes Agent aims to serve the GitLab vision of providing you a single application for the whole DevSecOps lifecycle. 
As a result, the Agent's goal is to provide an integrated experience with every relevant GitLab feature.\n\nWhat GitLab features does the Agent integrate with today?\n\n- GitLab CI/CD\n- Container network security\n- Container host security\n- Container scanning\n\nIn this post, we will focus on the GitLab CI/CD integration. Given the power and flexibility of GitLab CI/CD, the majority of our users have been using it successfully for years and, until the Agent appeared, they often had to manually script their cluster connections and deployments into it. If that setup sounds familiar, I recommend checking out the Agent's CI/CD integration feature, the CI/CD tunnel. The CI/CD tunnel enables a cluster connection to be used from GitLab CI/CD, so you need only minor adjustments to your existing setup, and you receive a GitLab-supported component that we are continuously expanding with more and more integrations.\n\nThe CI/CD tunnel is always enabled in the project where you register and configure the Agent, and the given connection can be shared with other groups and projects, too. This way, a single connection can be reused throughout the organization to save on resource and maintenance costs.\n\nGitLab automatically injects the available Kubernetes contexts into the CI/CD runner environment's `KUBECONFIG`. As a result, you can activate a context and start using it without much setup.\n\n## How to configure the CI/CD tunnel\n\nAs already mentioned, the CI/CD tunnel is always enabled in the project where you register and configure the Agent. If you would like to use the tunnel in the same repository, no configuration is needed. If you would like to share the connection with other repositories, open your agent configuration file and add the following lines:\n\n```yaml\nci_access:\n   projects:\n   - id: path/to/project\n   groups:\n   - id: path/to/group\n```\n\nChange the placeholder paths here to your project or group path. 
Sharing a connection with a group enables access to all the projects within that group. Once you save the configuration file, you can turn your attention to your application project repository, and use the following job to list and select an agent:\n\n```yaml\ndeploy:\n   image:\n     name: bitnami/kubectl:latest\n     entrypoint: [\"\"]\n   script:\n   - kubectl config get-contexts\n   - kubectl config use-context path/to/agent-configuration-project:your-agent-name\n```\n\n## How to install GitLab-integrated applications into your cluster\n\nTo put the above into practice, let's install some applications into the cluster. As various GitLab features require applications to be installed in your cluster and configured for GitLab, GitLab provides a cluster management project template to help you get started. One can easily install these GitLab-integrated applications into their clusters using this template. Let's see how to use it with the CI/CD tunnel and the Agent!\n\n### Create the cluster management project\n\nFirst, let's create a new GitLab project using the \"Cluster Management Project\" template. Open the [create new project from template page](https://gitlab.com/projects/new#create_from_template), search for \"GitLab Cluster Management\", and start a new project with that template.\n\nYou will receive a project that already contains quite a lot of things! It comes with a ready-made `.gitlab-ci.yml` file and a [helmfile](https://github.com/roboll/helmfile)-based setup for 11 applications that integrate with various GitLab functionalities. [Each application might require different configurations](https://docs.gitlab.com/ee/user/clusters/management_project_template.html#built-in-applications). You can read about these in the linked documentation.\n\nAs part of this article, we will install NGINX Ingress and GitLab Runners using the cluster management project.\n\n### How to share the CI/CD tunnel\n\nThis newly created project needs access to one of your clusters. 
Let's share an Agent's connection with this project as described above. Edit your agent configuration file and add:\n\n```yaml\nci_access:\n   projects:\n   - id: path/to/your/cluster/management/project\n```\n\n### Pick the right Kubernetes context\n\nThe CI/CD tunnel is already available from within your cluster management project. We tried to make it simple to start using a cluster connection without the need to edit `.gitlab-ci.yml`. For simple setups, you can just set a `KUBE_CONTEXT` environment variable with the path to and name of your agent.\n\nSet the environment variable under \"Settings\" / \"CI/CD\" / \"Variables\".\n\n![KUBE_CONTEXT variable setup](https://about.gitlab.com/images/blogimages/2022-01-07-gitops-with-gitlab-using-ci-cd/KUBE_CONTEXT_setting.png)\n\n### How to install NGINX Ingress\n\nWe are ready to install any of the supported applications using this agent connection! Let's start by installing NGINX Ingress, as it does not require any application-specific configuration.\n\nIn your cluster management project, edit `helmfile.yaml` and uncomment the line that points to the `ingress` application. Commit the changes and wait for GitLab magic to happen!\n\nThis was really easy!\n\n### How to install GitLab Runner\n\nAs GitLab Runner is more integrated with GitLab, it needs a little bit of configuration. [The Runner should know](https://docs.gitlab.com/ee/user/infrastructure/clusters/manage/management_project_applications/runner.html#required-variables) where it can find your GitLab instance and needs a token to authenticate with GitLab.\n\nTo make it simple for you to install a Runner fleet, you can configure these as environment variables. By default, the `CI_SERVER_URL` variable is used to specify the GitLab URL. You can override this if needed. For the token, you should create `GITLAB_RUNNER_REGISTRATION_TOKEN` as a masked and protected environment variable with the value of your Runner registration token. 
Feel free to use either a project or a group registration token.\n\nFinally, as with the Ingress installation, uncomment the related line in the `helmfile.yaml`.\n\n## The full potential of the cluster management project\n\nThe cluster management project you created is yours. Thus, you are free to change it, extend it, or get rid of it. In this section, I would like to share a few ideas on how you might get the most out of it.\n\n### Did you move away from Helm v2 already?\n\nThe `.gitlab-ci.yml` file in the cluster management project has a job that helps users upgrade their Helm v2 installations to v3. If you never had these applications installed through a cluster management project with Helm v2, then you don't need that job. Feel free to delete it from your CI YAML.\n\n### Extend the project with your own apps\n\nThe cluster management project is self-contained as is. You can add your own Helm/helmfile-based application setups to it. To get started, I recommend checking out the [helmfile](https://github.com/roboll/helmfile) README.\n\n### Stay up to date\n\nWe want you to own the cluster management project, so you can upgrade the applications independently of GitLab releases. Still, you might prefer to follow GitLab releases, too, as you can expect improvements to the cluster management project template. How can you do that?\n\nIf you followed the `kpt`-based Agent installation setup, you know that `kpt` can check out a Git subtree and merge local changes with upstream changes when you request an update. You can use `kpt` here, too!\n\nAs you manage the cluster management project, you can replace selected applications with their `kpt` checkouts. 
For example, you can start following the upstream template with:\n\n```bash\ncd applications\nrm -rf prometheus\nkpt pkg get https://gitlab.com/gitlab-org/project-templates/cluster-management.git/applications/prometheus prometheus\n```\n\nand update to the most recent version by running:\n\n```bash\nkpt pkg update applications/prometheus\n```\n\n## Recap\n\nAs we have seen in this article, the GitLab Kubernetes Agent provides far more possibilities than focused GitOps tools do. Besides supporting pull-based deployments, we support GitLab users in integrating with their existing CI/CD-based workflows. Moreover, GitLab ships with a Cluster Management Project template that supplements the various GitLab integrations and simplifies getting started with them.\n\n## What's next\n\nBuilding on our knowledge of the CI/CD tunnel, in the next article we will look into how to use Auto DevOps with the Agent.\n\n_[Click here](/blog/the-ultimate-guide-to-gitops-with-gitlab/) for the next tutorial._\n\n\n\n\n\n",[9,685,1248],{"slug":1536,"featured":6,"template":688},"gitops-with-gitlab-using-ci-cd","content:en-us:blog:gitops-with-gitlab-using-ci-cd.yml","Gitops With Gitlab Using Ci Cd","en-us/blog/gitops-with-gitlab-using-ci-cd.yml","en-us/blog/gitops-with-gitlab-using-ci-cd",{"_path":1542,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":1543,"content":1548,"config":1554,"_id":1556,"_type":13,"title":1557,"_source":15,"_file":1558,"_stem":1559,"_extension":18},"/en-us/blog/gitops-with-gitlab",{"title":1544,"description":1545,"ogTitle":1544,"ogDescription":1545,"noIndex":6,"ogImage":1338,"ogUrl":1546,"ogSiteName":675,"ogType":676,"canonicalUrls":1546,"schema":1547},"GitOps delivery by connecting Kubernetes clusters to GitLab","This is the first in a seven-part series on GitOps using GitLab's DevOps Platform.","https://about.gitlab.com/blog/gitops-with-gitlab","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        
\"headline\": \"Here's how to do GitOps with GitLab\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Viktor Nagy\"}],\n        \"datePublished\": \"2021-10-21\",\n      }",{"title":1549,"description":1545,"authors":1550,"heroImage":1338,"date":1551,"body":1552,"category":683,"tags":1553},"Here's how to do GitOps with GitLab",[765],"2021-10-21","\n\n_It is possible to use GitLab as a best-in-class GitOps tool, and this blog post series is going to show you how. These easy-to-follow tutorials will focus on different user problems, including provisioning, managing a base infrastructure, and deploying various third-party or custom applications on top of them. You can find the entire \"Ultimate guide to GitOps with GitLab\" tutorial series [here](/blog/the-ultimate-guide-to-gitops-with-gitlab/)._\n\nThis post provides an overview of the series, and will provide a bit of context around GitOps, [Infrastructure as Code](/topics/gitops/infrastructure-as-code/), and related notions.\n\n## Start with the buzzwords\n\nThe DevOps industry is changing at a very fast pace, and there are plenty of new ideas popping up around this transformation. What are these? Let’s look into the following concepts and why they matter: DevOps, site reliability engineers (SRE), GitOps, Infrastructure as Code, and containers.\n\nThe term DevOps was coined by Patrick Debois in 2009. DevOps is a cultural approach, not a technology or a set of processes. At its core there are a few principles such as continuous learning, fast feedback loops and a clear flow of work. There is a strong connection between DevOps and SRE, as one can think of the SRE approach as a well-defined implementation of DevOps. Two important aspects of the SRE approach are codified infrastructure management and metrics. 
These enable the level of automation needed for feedback, and their central metrics (SLIs) are shifting left to development teams, too.\n\nWith the emergence of cloud computing, infrastructure can be managed fully through APIs. This gave rise to Infrastructure as Code, or IaC. IaC means infrastructure engineers almost never have to click through a provider’s UI to configure a new user or a resource. IaC approaches can be used to configure GitLab itself or to allow GitLab to configure a third-party system (such as creating a cluster or managing databases).\n\n[GitOps](/topics/gitops/) is the new kid on the block here, and it basically summarizes the current state of our industry. IaC projects likely store their code in version-controlled ways, probably in Git. They might even be automated through pipelines, and the resulting infrastructure might have good observability built into the whole stack. So, what does GitOps bring to the table? It brings us two things. First, GitOps wants to avoid drift using a reconciliation loop that automatically “fixes” the infrastructure if it deviates from the codified state found in the IaC repository. Whether this is feasible and how this is done is still a debated question. At the same time, the rise of declarative infrastructure popularized by Kubernetes makes this a compelling approach to many. The second benefit of GitOps is its \"declarative\" nature. By being declarative, the desired state of the infrastructure is described in the Git repo. This reduces provisioning complexity, as the end system is tasked with setting up the described infrastructure. Contrast this with an imperative setup where the administrators have to codify the exact steps of setting up the infrastructure.\n\nContainers are mentioned here for a single reason: Once we get to deployments, I am going to focus on containerized applications only. 
Containers have already proved to be a great layer of abstraction for application delivery.\n\nYou can [read more about the evolution of DevOps](/blog/gitops-as-the-evolution-of-operations/) and how we got to GitOps as part of this evolution.\n\n## The series overview\n\n**Infrastructure provisioning with GitLab and Terraform**: My next post in the series will outline how to use GitLab to provision infrastructure. In this post, I will use a GitLab project to create an EKS cluster following IaC best practices. To do this, I will use Terraform, as it is considered the de facto standard in infrastructure provisioning, and GitLab has strong built-in support for it.\n\n**Connecting GitLab with a Kubernetes cluster - Quickstart**: This post will show how one can quickly connect a cluster with GitLab using our recommended way, the GitLab Agent for Kubernetes. As this is a quickstart, this approach does not use all the GitLab IaC recommendations. Nevertheless, it is a great start that we can build upon later. This post will outline the different approaches for connecting a cluster to GitLab, including our recommended approach.\n\n**Secrets management with GitLab**: In the third post, I will deploy a simple “secrets as code” solution into our cluster and set it up for future use. This will demonstrate how third-party services can easily be deployed and managed with GitLab. Moreover, this specific tool will be used in the subsequent post where we migrate from the quickstart cluster connection to a self-managing, IaC connection.\n\n**Managing the cluster connection from code**: In the second post, we created a GitLab-connected cluster, but we either had to manage the cluster from our local CLI or resort to some CI magic. Now I will demonstrate how to build out more robust management of the cluster connection. 
We set up the cluster connection to manage itself using a pull-based approach.\n\n**Integrate the cluster into GitLab**: As GitLab is not just an SCM and CI tool, but the complete DevOps Platform, it has robust monitoring and security integrations with Kubernetes. In this post, I am going to show how one can use the GitLab-provided cluster management application on top of our cluster connection, and install NGINX, Cilium, and custom runners with minimal effort, in an IaC style.\n\n**Application deployment with Auto DevOps**: The final post in the series will illustrate how business applications can be easily deployed into the cluster. I will focus on push-based deployments, as many development teams are familiar with pipelines, unlike the newer pull-based approaches. At the same time, given the content from the previous posts, it should be possible to put together a pull-based deployment on top of Auto DevOps as well.\n\n_[Click here](/blog/the-ultimate-guide-to-gitops-with-gitlab/) for the next tutorial._\n\n\n",[539,685,9],{"slug":1555,"featured":6,"template":688},"gitops-with-gitlab","content:en-us:blog:gitops-with-gitlab.yml","Gitops With Gitlab","en-us/blog/gitops-with-gitlab.yml","en-us/blog/gitops-with-gitlab",{"_path":1561,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":1562,"content":1567,"config":1572,"_id":1574,"_type":13,"title":1575,"_source":15,"_file":1576,"_stem":1577,"_extension":18},"/en-us/blog/gke-gitlab-integration",{"title":1563,"description":1564,"ogTitle":1563,"ogDescription":1564,"noIndex":6,"ogImage":1140,"ogUrl":1565,"ogSiteName":675,"ogType":676,"canonicalUrls":1565,"schema":1566},"GitLab + Google Cloud Platform = simplified, scalable deployment","We’ve teamed up with Google Cloud Platform – here’s what that means for you.","https://about.gitlab.com/blog/gke-gitlab-integration","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"GitLab + 
Google Cloud Platform = simplified, scalable deployment\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Rebecca Dodd\"}],\n        \"datePublished\": \"2018-04-05\",\n      }",{"title":1563,"description":1564,"authors":1568,"heroImage":1140,"date":1569,"body":1570,"category":300,"tags":1571},[940],"2018-04-05","\n\nGet super-simple deployment for your app with GitLab and Google Cloud Platform (GCP): thanks to our integration with Google Kubernetes Engine (GKE), you can now get CI/CD and Kubernetes deployment set up with just a few clicks, and [$500 credit](#get-seamless-integration-with-gke-and-500-credit-for-your-project) to get you started.\n\n## Now everyone can get automatic code quality, security testing, and no-configuration deployment\n\nWith increasing adoption of [cloud native](/topics/cloud-native/) practices, the use of [microservices](/topics/microservices/) and containers has become critical to modern software development. Kubernetes has emerged as the first choice for container orchestration, allowing apps to scale elastically from a couple of users to millions. It's been possible to deploy to Kubernetes from GitLab for quite a while, but the process of setting up and managing everything was manual and time intensive.\n\nToday, we’re happy to announce we've been collaborating with Google to make Kubernetes easy to set up on GitLab. Now, with our native [Google Kubernetes Engine integration](/partners/technology-partners/google-cloud-platform/), you can automatically spin up a cluster to deploy applications, with just a few clicks. Simply connect your Google account, enter a few details, and you're good to go! GitLab will create the clusters for you. The clusters are fully managed by Google and run on Google Cloud Platform's best-in-class infrastructure.\n\nThis also means you can easily take advantage of GitLab [Auto DevOps](https://docs.gitlab.com/ee/topics/autodevops/). 
This feature does all the hard work for you, by automatically configuring CI/CD pipelines to build, test, and deploy your application. To make use of Auto DevOps, it used to be necessary to have an in-depth understanding of Kubernetes, and you had to manage your own clusters. Not any more!\n\nWith the integration between GitLab and GKE, we’ve made it simple to set up a managed deployment environment on Google Cloud Platform and access our robust [DevOps capabilities](/topics/devops/). That’s all the benefits of fully automated code quality, security testing, and deployment, with none of the headache of managing and updating your clusters (Google does that all for you!). More than half of developers and 78 percent of managers in our [2018 Global Developer Report](/developer-survey/) agreed that automating more of the software development lifecycle is a top priority for their organization. We hope that this integration gives you a head start, by offering automation out of the box with Kubernetes and Auto DevOps.\n\n## What’s next for GitLab?\n\nWe’re not just excited about offering this integration for you to use, we’re excited to use it ourselves! We’re already in the process of migrating GitLab.com to Google Cloud Platform. For us, the primary reason to migrate was because it has the most mature Kubernetes platform. By moving, we get access to security functionality like default encrypted data at rest, a broad, ever-expanding list of localities served globally, and tight integration with our existing CDN for faster caching. Be on the lookout for more information on our migration as it progresses.\n\n## Get seamless integration with GKE and $500 credit for your project\n\nEvery new Google Cloud Platform account receives $300 in credit [upon signup](https://console.cloud.google.com/freetrial?utm_campaign=2018_cpanel&utm_source=gitlab&utm_medium=referral). 
In partnership with Google, GitLab is able to offer an additional $200 for new GCP accounts to get started with GitLab’s GKE integration. Here's a link to [apply for your $200 credit](https://cloud.google.com/partners/partnercredit/?pcn_code=0014M00001h35gDQAQ#contact-form).\n\n## Join Google and GitLab for a live demo\n\nOn April 26th, join Google’s [William Denniss](https://www.linkedin.com/in/williamdenniss/) and GitLab’s [William Chia](https://www.linkedin.com/in/williamchia/) for a walkthrough of the new GKE integration. You’ll learn how easy it is to set up a Kubernetes cluster, how to deploy your app using GitLab CI/CD, and how GKE enables you to deploy, update, and manage containerized applications at scale.\n\n[Register today](/webcast/scalable-app-deploy/)!\n",[1150,1149,232,9,685],{"slug":1573,"featured":6,"template":688},"gke-gitlab-integration","content:en-us:blog:gke-gitlab-integration.yml","Gke Gitlab Integration","en-us/blog/gke-gitlab-integration.yml","en-us/blog/gke-gitlab-integration",{"_path":1579,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":1580,"content":1585,"config":1591,"_id":1593,"_type":13,"title":1594,"_source":15,"_file":1595,"_stem":1596,"_extension":18},"/en-us/blog/gke-webcast-recap-post",{"title":1581,"description":1582,"ogTitle":1581,"ogDescription":1582,"noIndex":6,"ogImage":1140,"ogUrl":1583,"ogSiteName":675,"ogType":676,"canonicalUrls":1583,"schema":1584},"Scalable app deployment with GitLab and Google Cloud Platform","Get the power to spin up a Kubernetes cluster managed by Google Cloud Platform in a few clicks – watch the demo of our native integration.","https://about.gitlab.com/blog/gke-webcast-recap-post","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"Scalable app deployment with GitLab and Google Cloud Platform\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Suri Patel\"}],\n        \"datePublished\": \"2018-05-10\",\n  
    }",{"title":1581,"description":1582,"authors":1586,"heroImage":1140,"date":1588,"body":1589,"category":683,"tags":1590},[1587],"Suri Patel","2018-05-10","\n\nThe GitLab + Google Kubernetes Engine integration's versatility speeds up software development and delivery while maintaining security and scale, allowing developers to focus on building apps instead of managing infrastructure. William Chia, Senior Product Marketing Manager at GitLab, and guest speaker William Denniss, Product Manager at Google, recently met to discuss the benefits of the integration.\n\n- [What is the GitLab GKE integration?](#what-is-the-gitlab-gke-integration)\n- [What's in the webcast?](#whats-in-the-webcast)\n- [Watch the recording](#watch-the-recording)\n- [Key takeaways](#key-takeaways)\n- [Webcast Q&A](#webcast-qa)\n\n## What is the GitLab GKE integration?\n\nWith our native Google Kubernetes Engine integration, you can automatically spin up a cluster to deploy applications, with just a few clicks. Simply connect your Google account, enter a few details, and GitLab will create the clusters for you. 
The clusters are fully managed by Google and run on Google Cloud Platform’s best-in-class infrastructure.\n\n## What's in the webcast\n\nWilliam Chia, Senior Product Marketing Manager at GitLab, and William Denniss, Product Manager at Google, explain how to deploy applications at scale using GKE and GitLab’s robust Auto DevOps capabilities.\n\nWe start with a crash course in Kubernetes, examining containers and deployment, before taking a closer look at the [Google Kubernetes Engine integration](/partners/technology-partners/google-cloud-platform/) and seeing it in action.\n\n## Watch the recording\n\u003C!-- blank line -->\n\u003Cfigure class=\"video_container\">\n  \u003Ciframe src=\"https://www.youtube.com/embed/uWC2QKv15mk\" frameborder=\"0\" allowfullscreen=\"true\"> \u003C/iframe>\n\u003C/figure>\n\u003C!-- blank line -->\n\n## Key takeaways\n\n#### A seamless collaboration\n\n>Using GitLab with GKE creates an environment in which you just need to merge your code, and GitLab does all the rest. - William Chia, GitLab Senior Product Marketing Manager\n\n#### Kubernetes for success\n\n>If you go with Kubernetes, it gives you a good start. You can hit a button and configure GKE to do it for you and scale massively when you need to. It really sets you up for success. GitLab is a really great way to get started with Kubernetes, because it sets up everything nicely for you in an automated way. - William Denniss, Google Product Manager\n\n## Webcast Q&A\n\nDuring the webcast, live participants submitted questions to the team via chat. Here are some of the answers given via chat, along with several questions we didn’t get a chance to answer during the webcast.\n\n>Does Kubernetes have a built-in load balancer?\n\nIt does have support for load balancing across pods within a service. You may also need an external load balancer, in the event you have multiple nodes. 
Creating a [Kubernetes Service object](https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster) and an [external load balancer](https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer) are great first steps.\n\n>Is it possible to deploy multiple projects in the same Kubernetes cluster?\n\nIt is, you can add the cluster manually to additional projects. We are also working to make this easier in our UI, with [support for defining clusters at the group level](https://gitlab.com/gitlab-org/gitlab-ce/issues/34758).\n\n>So coming back to the setup of a cluster. If you have a separate environment for development, test, acceptance, and production, it seems we would have multiple options, like multiple clusters, or one cluster with multiple environments. Or even one cluster, one environment and point the correct environment in the `.gitlab-ci.yml` file (environment page in GitLab). What do you recommend to use to have a nice CI/CD integration and still separate environments?\n\nWe support integrating multiple clusters into a single project, and you can define which environments should be deployed to which clusters by [using the environment scope](https://docs.gitlab.com/ee/user/project/clusters/#setting-the-environment-scope).\n\n>Is it possible to add several clusters to the same project? To isolate environments based on clusters rather than namespaces.\n\nYes, this is a feature of GitLab Premium/Silver. (Note: Open source projects on GitLab.com get all of the features of our top-tier plan for free. Public projects on GitLab.com also have this capability.)\n\n>Does GitLab support on-demand cluster creation for integration testing for QA environments?\n\nWe support the integration of multiple clusters, and you can define which cluster each environment should be deployed to. For example, you can state that all review apps should be deployed into one cluster. 
If you would like to dynamically create a cluster during a test, you can, of course, do that as well by scripting it in a job.\n\n>Are these features available on GitLab CE?\n\nCluster integration and the main Auto DevOps functionality are available in Core (CE or EE without a license). Some jobs do require Premium, and they are noted in our [Auto DevOps documentation](https://docs.gitlab.com/ee/topics/autodevops/#stages-of-auto-devops).\n\n>The test stages are paid features, right?\n\nMany test jobs are open source features available in Core, but some do require a paid license. The requirements for each job are noted in our [Auto DevOps documentation](https://docs.gitlab.com/ee/topics/autodevops/#stages-of-auto-devops).\n\n>What did you mean: “You can run Enterprise Edition without a license?”\n\nGitLab Enterprise Edition uses a license key to grant you access to the features of the Starter, Premium, and Ultimate plans. If you install Enterprise Edition and don’t have a license key, then you will get access to all of the Core features.\n\n[Learn more about GitLab's tiers](/blog/gitlab-tiers/).\n\n[Learn if you should use Community Edition or Enterprise Edition](/install/ce-or-ee/).\n\n>Is there a free version of GKE for testing and learning?\n\nEvery new Google Cloud Platform account receives $300 in credit upon [signup](https://console.cloud.google.com/freetrial?utm_campaign=2018_cpanel&utm_source=gitlab&utm_medium=referral). In partnership with Google, GitLab is able to offer an additional $200 for new GCP accounts to get started with GitLab’s GKE integration. This allows you ample usage to test and learn for free. Visit the Google partner credit page to apply for the $200 additional credit.\n\n>I see there is a $200 credit for playing around with GitLab and GKE. Can you elaborate on that? How to receive it, etc... Is it available for personal use or for professional use only? 
A contact form opens that wants my professional email address.\n\nThe $200 partner credit is intended for professional use. You can apply by visiting the Google Cloud Platform [partner page](https://cloud.google.com/partners/partnercredit/?PCN=a0n60000006Vpz4AAC) and filling out the form. You'll receive an email from the Google team with a key to redeem your credit.\n\n>Will Prometheus also gather the metrics without Auto DevOps, for example our own `.gitlab-ci.yml`? Or do we need to get something from the DevOps template?\n\nWe detect common system services like the NGINX Ingress or Kubernetes CPU/Memory metrics. If you use the NGINX Ingress deployed from GitLab, it is automatically configured for exporting Prometheus metrics. Additional documentation is available in our [Prometheus documentation](https://docs.gitlab.com/ee/user/project/integrations/prometheus_library/nginx_ingress.html).\n\n>Will you also support AWS?\n\nOther providers are certainly items we are considering for future releases, but we started with GKE since we felt it has the best managed Kubernetes experience available today. Other clusters can always be added manually, with just a few extra steps.\n\n>What if GitLab is running on GKE itself, can you connect the app to the same Kubernetes cluster GitLab is running on? And how safe is it to run this auto-deployment on your existing Kubernetes clusters/cluster GitLab is running on? Looks as if you could easily waste your cluster with this.\n\nIf you’re running GitLab on GKE, you can definitely connect it to the same cluster GitLab is running on to execute your GitLab runners, and as the deployment target for Auto DevOps. I’d advise using separate namespaces for your GitLab instance to avoid any interference.\n\nNamespaces are the key to achieving workload isolation in Kubernetes; they provide isolation between different deployments to avoid one accidentally influencing the other. 
If you like (and it’s a bit more configuration), you can even use RBAC to prevent any developer pipelines from ever touching production.\n\nIf you want total isolation, then create a separate GCP project, with a separate cluster for production :) This is definitely the best practice for larger deployments.\n\n>I have been playing around with the `dependency_scanning`/`sast`/`dast` jobs, but the images are not cached on the runner. Will they be cached in the (near) future or do we need to add any configuration?\n\nWe use Docker-in-Docker for most of these jobs, so caching is a bit tricky, and we have an [issue tracking this](https://gitlab.com/gitlab-org/gitlab-ce/issues/17861).\n\n>What does GitLab use to create the container image?\n\nAuto DevOps uses Herokuish and Heroku buildpacks to automatically detect and build the application into a Docker image. If you add a Dockerfile to your repo, GitLab will use `docker build` to create a Docker image.\n\n>Does the GKE/Kubernetes integration require the GitLab installation to be publicly accessible from the internet? Or will it work just as well if the GitLab server is private?\n\nIt does not, but if you deploy a runner to the cluster it will need to be able to access the GitLab server to pick up jobs and do its Git clones.\n\n>How does one manage different `.env` files for different environments with GitLab CI?\n\nIf you define environment variables at the project level, you can specify which ones are available for which environments by following the [documentation on limiting environment scopes](https://docs.gitlab.com/ee/ci/variables/#limiting-environment-scopes-of-secret-variables).\n\n>What do I do when I receive this error: “We could not verify that one of your projects on GCP has billing enabled. 
Please try again.”\n\nPlease read the second bullet in the [GCP billing section of the documentation](https://docs.gitlab.com/ee/user/project/clusters/#adding-and-creating-a-new-gke-cluster-via-gitlab), which should help ensure that billing is set up for your account.\n\n>Is there a setting to control the number of review apps which are running live at any given time? Worried about cost.\n\nNote that review apps only run on open Merge Requests. If you are using the Auto DevOps template, then once the code is merged, or the MR is closed, the review app shuts down. Today, there is no feature to limit the number of review apps, but there are a few options. Review app environments can be manually stopped from both the MR and the environments page. You can also disable review apps altogether.\n\n>What are the requirements for installing the one-click applications to the cluster?\n\nHelm Tiller, Ingress, Prometheus, and GitLab Runner don't have any special requirements to install via one-click. The integration takes care to ensure the appropriate container images are used and everything is configured properly. The only prerequisite is to install Helm Tiller first (since it is used to install the other applications). If you install these applications manually on your cluster, you can learn about the requirements for each on their respective documentation pages.\n\n>Does this replace solutions like Rancher?\n\nIn a nutshell, yes, the GitLab GKE integration provisions and manages clusters on GKE, alleviating the need for Rancher. But this also depends on your needs. You can use GitLab with or without Rancher. 
For example, if you are using AKS or EKS, then Rancher will provision and manage your cluster automatically, while connecting those clusters to GitLab requires manual configuration.\n\n>What is the current state of installing GitLab on Kubernetes?\n\nGitLab has two Helm charts for installing GitLab on Kubernetes – the GitLab-Omnibus chart and the cloud native GitLab chart.\n\n- GitLab-Omnibus: The best way to run GitLab on Kubernetes today, suited for small deployments. The chart is in beta and will eventually be replaced by the cloud native GitLab chart.\n- Cloud native GitLab chart: The next generation GitLab chart, currently in alpha. It will support large deployments with horizontal scaling of individual GitLab components. For more information, please visit [the GitLab Helm chart documentation page](https://docs.gitlab.com/charts/).\n\n>How usable is the new Helm chart for GitLab on Kubernetes?\n\nIt is in alpha, and we plan to have a beta available in May/June. We created [an issue](https://gitlab.com/groups/charts/-/epics/17) to note the items we are working to address before beta.\n\n>How can I enable Auto DevOps if I already have a `.gitlab-ci.yml` file, but only for build and test?\n\nAuto DevOps will use your custom `.gitlab-ci.yml` file if it is present in your repo. If there is no file, then Auto DevOps will use the default Auto DevOps template. You can also see the [Auto DevOps template `.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab-ci-yml/blob/master/Auto-DevOps.gitlab-ci.yml) and use it as a reference to add/update your `.gitlab-ci.yml`. For more information, please visit [the customizing `.gitlab-ci.yml` documentation page](https://docs.gitlab.com/ee/topics/autodevops/#customizing-gitlab-ci-yml).\n\nHave you tried the GitLab + GKE integration? 
Tweet us [@gitlab](https://twitter.com/gitlab).\n",[1150,1149,9,1127,923],{"slug":1592,"featured":6,"template":688},"gke-webcast-recap-post","content:en-us:blog:gke-webcast-recap-post.yml","Gke Webcast Recap Post","en-us/blog/gke-webcast-recap-post.yml","en-us/blog/gke-webcast-recap-post",{"_path":1598,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":1599,"content":1605,"config":1610,"_id":1612,"_type":13,"title":1613,"_source":15,"_file":1614,"_stem":1615,"_extension":18},"/en-us/blog/gko-on-ocp",{"title":1600,"description":1601,"ogTitle":1600,"ogDescription":1601,"noIndex":6,"ogImage":1602,"ogUrl":1603,"ogSiteName":675,"ogType":676,"canonicalUrls":1603,"schema":1604},"How to install and use the GitLab Kubernetes Operator","Follow these step-by-step instructions to set up the GitLab Kubernetes Operator on a Kubernetes cluster.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749682191/Blog/Hero%20Images/GKO-Thumbnail.png","https://about.gitlab.com/blog/gko-on-ocp","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"How to install and use the GitLab Kubernetes Operator\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Cesar Saavedra\"}],\n        \"datePublished\": \"2021-11-16\",\n      }",{"title":1600,"description":1601,"authors":1606,"heroImage":1602,"date":1607,"body":1608,"category":683,"tags":1609},[809],"2021-11-16","\n\nThe GitLab Kubernetes Operator was released on October 12, 2021.\n\n## What is the GitLab Kubernetes Operator?\n\nThe GitLab Operator allows you to install and run an instance of GitLab in a vanilla Kubernetes or OpenShift cluster. 
Kubernetes operators increase the reliability and availability of your applications by automating Day 2 operations such as upgrading components, managing data integrity, reconfiguring applications, recovering automatically from failures, and autoscaling.\n\n## Installing the GitLab Kubernetes Operator on an OpenShift Container Platform cluster\n\nIn this short post, we show you how to install and run the GitLab Operator to create a GitLab instance on an OpenShift Container Platform cluster, which we have already provisioned:\n\n![OCP console](https://about.gitlab.com/images/blogimages/gko-on-ocp/0-ocp-console.png){: .shadow.medium.center.wrap-text}\nThe OpenShift Container Platform console\n{: .note.text-center}\n\nInspecting the running pods of the OpenShift cluster, we see that Prometheus is already being used as the metrics server, which is a prerequisite for the installation of the GitLab Operator:\n\n![Prometheus up and running](https://about.gitlab.com/images/blogimages/gko-on-ocp/1-prometheus-up.png){: .shadow.medium.center.wrap-text}\nPrometheus up and running on cluster\n{: .note.text-center}\n\nAlso, we verify that the gitlab-system namespace does not yet exist:\n\n![gitlab namespace not present](https://about.gitlab.com/images/blogimages/gko-on-ocp/2-no-gitlab-sys-namespace.png){: .shadow.medium.center.wrap-text}\ngitlab-system namespace non-existent\n{: .note.text-center}\n\nAnother prerequisite is cert-manager, which automates the management and issuance of TLS certificates. Let’s use the OpenShift OperatorHub to install cert-manager and create an instance of it. We first verify that one is not running. 
Then we head to the OperatorHub and install the cert-manager Operator:\n\n![cert-manager in OperatorHub](https://about.gitlab.com/images/blogimages/gko-on-ocp/3-cert-mgr-in-operatorhub.png){: .shadow.medium.center.wrap-text}\nInstalling cert-manager using its operator in OperatorHub\n{: .note.text-center}\n\n**NOTE:** Once the GitLab Kubernetes Operator is certified with OpenShift, it will have its own tile in the OperatorHub.\n{: .alert .alert-info}\n\nThen we create an instance of cert-manager by using its newly installed operator:\n\n![cert-manager instance creation](https://about.gitlab.com/images/blogimages/gko-on-ocp/4-create-instance-cert-mgr.png){: .shadow.medium.center.wrap-text}\nCreating an instance of cert-manager using its operator\n{: .note.text-center}\n\nIn preparation for the GitLab Operator installation, we create the namespace gitlab-system, under which all of the GitLab resources will live:\n\n![gitlab-system namespace creation](https://about.gitlab.com/images/blogimages/gko-on-ocp/5-create-gitlab-sys-namespace.png){: .shadow.medium.center.wrap-text}\nCreating the gitlab-system namespace\n{: .note.text-center}\n\nTo install the GitLab Operator, we define two environment variables: one sets the version of the GitLab Operator we want to use and the other sets the platform we are targeting. In this case, it is OpenShift. 
We then apply the GitLab Operator Custom Resource Definition (CRD) to the cluster, which creates the operator, by entering the following command:\n\n```\nexport GL_OPERATOR_VERSION=\"0.1.0\"\nexport PLATFORM=\"openshift\"\nkubectl apply -f https://gitlab.com/api/v4/projects/18899486/packages/generic/gitlab-operator/${GL_OPERATOR_VERSION}/gitlab-operator-${PLATFORM}-${GL_OPERATOR_VERSION}.yaml\n```\n\nAnd here's an example screenshot of what the output of this command looks like:\n\n![application of the CRD to the cluster](https://about.gitlab.com/images/blogimages/gko-on-ocp/6-applying-the-crd.png){: .shadow.medium.center.wrap-text}\nApplying the GitLab Kubernetes Operator to the OpenShift cluster\n{: .note.text-center}\n\nAs we watch the pods in the gitlab-system namespace, we see the creation of two pods for the gitlab-controller-manager:\n\n![operator pods](https://about.gitlab.com/images/blogimages/gko-on-ocp/7-watching-operator-pods-creation.png){: .shadow.medium.center.wrap-text}\nGitLab Kubernetes Operator pods being created on the OpenShift cluster\n{: .note.text-center}\n\nThe GitLab Kubernetes Operator is now installed on the OpenShift Container Platform cluster. Next, we need to use this newly installed operator to create an instance of GitLab.\n\n## Creating a GitLab instance on the cluster using the GitLab Kubernetes Operator\n\nTo create an instance of GitLab, we create a Custom Resource file called mygitlab.yaml to provide information, such as domain name and certmanager issuer email, for the GitLab Operator to use during the creation of the GitLab instance. 
Here is a parameterized example of the contents for this file:\n\n```\napiVersion: apps.gitlab.com/v1beta1\nkind: GitLab\nmetadata:\n  name: gitlab\nspec:\n  chart:\n    version: \"[REPLACE WITH THE CHART VERSION]\"\n    values:\n      global:\n        hosts:\n          domain: [REPLACE WITH YOUR DOMAIN NAME]\n        ingress:\n          configureCertmanager: true\n      certmanager-issuer:\n        email: [REPLACE WITH YOUR EMAIL]\n```\n\nAnd here is an example screenshot of what this file would look like with actual values for the parameters:\n\n![creating-gitlab-yaml-file](https://about.gitlab.com/images/blogimages/gko-on-ocp/8-creating-mygitlab-yaml.png){: .shadow.small.center.wrap-text}\nCreating mygitlab.yaml, the custom resource file\n{: .note.text-center}\n\nWe then apply the Custom Resource to the cluster. This action will kickstart the creation of all the pods needed for the instantiation of a GitLab instance on the cluster:\n\n![applying the custom resource to the cluster](https://about.gitlab.com/images/blogimages/gko-on-ocp/9-applying-the-cr.png){: .shadow.medium.center.wrap-text}\nApplying the custom resource file to the cluster\n{: .note.text-center}\n\nAfter a few minutes, when the GitLab instance is up and running, we obtain its external IP address from the nginx ingress controller installed by the GitLab Operator by entering the following command:\n\n> kubectl -n gitlab-system get services -o wide gitlab-nginx-ingress-controller\n\nHere's an example screenshot of its output:\n\n![getting the external ip](https://about.gitlab.com/images/blogimages/gko-on-ocp/10-get-external-ip.png){: .shadow.medium.center.wrap-text}\nObtaining the external IP address for our newly created GitLab instance\n{: .note.text-center}\n\nWe use this IP address to create DNS A records mapping the DNS names of three GitLab instance subsystems (minio, registry, and gitlab) to it. 
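For illustration, assuming a placeholder domain example.com and an external IP of 203.0.113.10 (both stand-ins for your own values), the three A records would look like:

```
gitlab.example.com.      A      203.0.113.10
registry.example.com.    A      203.0.113.10
minio.example.com.       A      203.0.113.10
```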
Here is a snapshot for the gitlab one (you need to do the same for the minio and registry subsystems):\n\n![creating dns record](https://about.gitlab.com/images/blogimages/gko-on-ocp/11-creating-dns-record.png){: .shadow.medium.center.wrap-text}\nCreating DNS A record for the gitlab subsystem\n{: .note.text-center}\n\n**NOTE:** I own the domain ocpgitlab.com. You would use a domain that you own.\n{: .alert .alert-info}\n\n## Logging in to the newly created instance running on the OpenShift Container Platform cluster\n\nBefore logging in to our newly created GitLab instance running on OpenShift Container Platform, we need to obtain the initial root password, which is stored as a secret under the gitlab-system namespace. You can retrieve it by entering the following command:\n\n> kubectl -n gitlab-system get secret gitlab-gitlab-initial-root-password -ojsonpath='{.data.password}' \\| base64 --decode ; echo\n\nNow we can point our browser at our newly created GitLab instance on OpenShift and log in as root:\n\n![logging in to GitLab](https://about.gitlab.com/images/blogimages/gko-on-ocp/13-log-in-to-gitlab.png){: .shadow.medium.center.wrap-text}\nLogging in to the newly created GitLab instance running on the OpenShift Container Platform cluster\n{: .note.text-center}\n\nThat’s it!\n\n## Conclusion\n\nWe have shown you how to install and run the GitLab Operator to create a GitLab instance on an OpenShift Container Platform cluster. 
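Incidentally, the initial-root-password command shown earlier works because Kubernetes stores secret data base64-encoded: the jsonpath expression extracts the encoded field, and `base64 --decode` recovers the plaintext. A standalone sketch of that decode step, using a made-up value rather than a real secret:

```
# "c3VwZXJzZWNyZXQ=" is the base64 encoding of the made-up value "supersecret";
# against a real cluster, the encoded value would come from the kubectl command.
encoded="c3VwZXJzZWNyZXQ="
printf '%s' "$encoded" | base64 --decode ; echo
# prints: supersecret
```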
View [this demo](https://youtu.be/sEBnuhzYD2I) to see how this feature works.\n\n## Read more on Kubernetes\n\n- [Threat modeling the Kubernetes Agent: from MVC to continuous improvement](/blog/threat-modeling-kubernetes-agent/)\n\n- [How to deploy the GitLab Agent for Kubernetes with limited permissions](/blog/setting-up-the-k-agent/)\n\n- [A new era of Kubernetes integrations on GitLab.com](/blog/gitlab-kubernetes-agent-on-gitlab-com/)\n\n- [Understand Kubernetes terminology from namespaces to pods](/blog/kubernetes-terminology/)\n\n- [What we learned after a year of GitLab.com on Kubernetes](/blog/year-of-kubernetes/)\n\n",[9,984,232],{"slug":1611,"featured":6,"template":688},"gko-on-ocp","content:en-us:blog:gko-on-ocp.yml","Gko On Ocp","en-us/blog/gko-on-ocp.yml","en-us/blog/gko-on-ocp",{"_path":1617,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":1618,"content":1624,"config":1630,"_id":1632,"_type":13,"title":1633,"_source":15,"_file":1634,"_stem":1635,"_extension":18},"/en-us/blog/google-cloud-next-anthos-kubernetes",{"title":1619,"description":1620,"ogTitle":1619,"ogDescription":1620,"noIndex":6,"ogImage":1621,"ogUrl":1622,"ogSiteName":675,"ogType":676,"canonicalUrls":1622,"schema":1623},"Google Cloud Next: Doubling down on Kubernetes and multi-cloud","Everything you need to know from last week’s big event.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749668514/Blog/Hero%20Images/multi-cloud-future.jpg","https://about.gitlab.com/blog/google-cloud-next-anthos-kubernetes","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"Google Cloud Next: Doubling down on Kubernetes and multi-cloud\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Melissa Smolensky\"}],\n        \"datePublished\": \"2019-04-16\",\n      }",{"title":1619,"description":1620,"authors":1625,"heroImage":1621,"date":1627,"body":1628,"category":300,"tags":1629},[1626],"Melissa 
Smolensky","2019-04-16","\nLast week at Google Next we saw Google bet big on Kubernetes. Google announced Anthos,\na multi-cloud platform based on Kubernetes, as well as Cloud Run, Google Cloud’s commercial Knative offering.\nThe key technology at the center of these two big announcements is Kubernetes.\nAs [Janakiram MSV](https://twitter.com/janakiramm) stated in a [Forbes article](https://www.forbes.com/sites/janakirammsv/2019/04/14/everything-you-want-to-know-about-anthos-googles-hybrid-and-multi-cloud-platform/#68ffc6d05b66) regarding Anthos,\n\n> The core theme of Anthos is application modernization. Google envisages a future where all enterprise applications will run on Kubernetes.\n\nAnd in his [New Stack article](https://thenewstack.io/how-google-cloud-run-combines-serverless-with-containers/) about Cloud Run,\n\n> Like the way it offered a managed Kubernetes service before any other provider, Google moved fast in exposing Knative through Cloud Run to developers.\n\nFor a quick overview of the news from Google Next, [Brandon Jung](https://twitter.com/brandoncjung),\nVP of Alliances at GitLab, recaps the announcements and how they impact GitLab. Take a look.\n\n\u003C!-- blank line -->\n\u003Cfigure class=\"video_container\">\n  \u003Ciframe src=\"https://www.youtube.com/embed/teRaXAPbfoA\" frameborder=\"0\" allowfullscreen=\"true\"> \u003C/iframe>\n\u003C/figure>\n\u003C!-- blank line -->\n\nLaunched by Google in 2014 at the first DockerCon, Kubernetes has become the de facto standard\nfor container orchestration. 
This May, 12,000 people will gather at KubeCon Barcelona to\nlearn how to implement and use Kubernetes to drive forward cloud native application development within their organizations.\n\nHere at GitLab we embraced Kubernetes early on as well, and we continue to take that dedication further,\nputting the power of Kubernetes into the developer workflow.\nEven the CNCF uses GitLab to provide cross-project\ncontinuous integration and interoperability testing.\n\n## Kubernetes throughout every step of the software development lifecycle\n\n“By allowing people to quickly connect Kubernetes clusters to their projects we are helping many\nenterprises embrace the cloud native way of building applications,” says Sid Sijbrandij, CEO at GitLab.\n“By providing a single application we allow enterprise developer and operations teams to embrace\nKubernetes every step of the way in their software development process.\nWe’ve seen a large financial institution go from a single build every two weeks to over 1,000\nself-served builds a day using GitLab. It is wonderful to see the scale we can unlock for organizations\nby providing access to Kubernetes in the developer workflow.”\n\n## GitLab plus Kubernetes\n\nIf you are looking to get started using [Kubernetes with GitLab](/solutions/kubernetes/),\nyou can easily connect any existing Kubernetes cluster on any platform to GitLab by using\nGitLab’s native Kubernetes integration. 
GitLab even makes it easy to set up and configure new\nclusters with just a few clicks using the Google Kubernetes Engine (GKE) integration.\nOnce connected, teams can install managed applications like Helm Tiller, Ingress,\nand Prometheus to their cluster with a single click in the GitLab interface.\nConnected clusters are available as a deploy target from GitLab CI/CD and are monitored\nusing GitLab’s bundled Prometheus capabilities.\n\nWe love seeing the community embrace GitLab and Kubernetes.\n\n\u003Cblockquote class=\"twitter-tweet\" data-lang=\"en\">\u003Cp lang=\"en\" dir=\"ltr\">getting back to grips with \u003Ca href=\"https://twitter.com/hashtag/GitLab?src=hash&amp;ref_src=twsrc%5Etfw\">#GitLab\u003C/a> CICD with \u003Ca href=\"https://twitter.com/hashtag/Terraform?src=hash&amp;ref_src=twsrc%5Etfw\">#Terraform\u003C/a> jobs and knocked up a \u003Ca href=\"https://twitter.com/hashtag/Kubernetes?src=hash&amp;ref_src=twsrc%5Etfw\">#Kubernetes\u003C/a> cluster for the runner! \u003Ca href=\"https://twitter.com/hashtag/devops?src=hash&amp;ref_src=twsrc%5Etfw\">#devops\u003C/a> \u003Ca href=\"https://twitter.com/hashtag/devoops?src=hash&amp;ref_src=twsrc%5Etfw\">#devoops\u003C/a> \u003Ca href=\"https://twitter.com/hashtag/nomorejenkins?src=hash&amp;ref_src=twsrc%5Etfw\">#nomorejenkins\u003C/a> \u003Ca href=\"https://twitter.com/hashtag/SRE?src=hash&amp;ref_src=twsrc%5Etfw\">#SRE\u003C/a> \u003Ca href=\"https://twitter.com/hashtag/GCP?src=hash&amp;ref_src=twsrc%5Etfw\">#GCP\u003C/a>\u003C/p>&mdash; Ferris Hall (@Ferrish07) \u003Ca href=\"https://twitter.com/Ferrish07/status/1106252265218703360?ref_src=twsrc%5Etfw\">March 14, 2019\u003C/a>\u003C/blockquote>\n\u003Cscript async src=\"https://platform.twitter.com/widgets.js\" charset=\"utf-8\">\u003C/script>\n\n\u003Cblockquote class=\"twitter-tweet\" data-lang=\"en\">\u003Cp lang=\"en\" dir=\"ltr\">I&#39;ve just posted a little experience report. 
I&#39;m now using \u003Ca href=\"https://twitter.com/hashtag/Kubernetes?src=hash&amp;ref_src=twsrc%5Etfw\">#Kubernetes\u003C/a>  to spread my build load, thanks to \u003Ca href=\"https://twitter.com/gitlab?ref_src=twsrc%5Etfw\">@gitlab\u003C/a> and \u003Ca href=\"https://twitter.com/GCPcloud?ref_src=twsrc%5Etfw\">@GCPcloud\u003C/a>. \u003Ca href=\"https://t.co/KGQ9kyEEP5\">https://t.co/KGQ9kyEEP5\u003C/a>\u003C/p>&mdash; Paul Hicks (@tenwit) \u003Ca href=\"https://twitter.com/tenwit/status/1104828372197113856?ref_src=twsrc%5Etfw\">March 10, 2019\u003C/a>\u003C/blockquote>\n\u003Cscript async src=\"https://platform.twitter.com/widgets.js\" charset=\"utf-8\">\u003C/script>\n\n\u003Cblockquote class=\"twitter-tweet\" data-lang=\"en\">\u003Cp lang=\"pl\" dir=\"ltr\">GitLab CI/CD &amp;&amp; Kubernetes by Bruno Fonseca \u003Ca href=\"https://t.co/ZDymOsbKfc\">https://t.co/ZDymOsbKfc\u003C/a>\u003C/p>&mdash; Paulo George Bezerra (@paulobezerr) \u003Ca href=\"https://twitter.com/paulobezerr/status/1108049894877659136?ref_src=twsrc%5Etfw\">March 19, 2019\u003C/a>\u003C/blockquote>\n\u003Cscript async src=\"https://platform.twitter.com/widgets.js\" charset=\"utf-8\">\u003C/script>\n\nCover image by [Cody Schroeder](https://unsplash.com/@codyrs) on [Unsplash](https://unsplash.com/photos/L99UKlcUBJY?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText)\n{: .note}\n",[727,278,1150,1149,9],{"slug":1631,"featured":6,"template":688},"google-cloud-next-anthos-kubernetes","content:en-us:blog:google-cloud-next-anthos-kubernetes.yml","Google Cloud Next Anthos 
Kubernetes","en-us/blog/google-cloud-next-anthos-kubernetes.yml","en-us/blog/google-cloud-next-anthos-kubernetes",{"_path":1637,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":1638,"content":1644,"config":1651,"_id":1653,"_type":13,"title":1654,"_source":15,"_file":1655,"_stem":1656,"_extension":18},"/en-us/blog/google-gitlab-serverless-webinar",{"title":1639,"description":1640,"ogTitle":1639,"ogDescription":1640,"noIndex":6,"ogImage":1641,"ogUrl":1642,"ogSiteName":675,"ogType":676,"canonicalUrls":1642,"schema":1643},"Container apps on serverless: Write once, deploy anywhere","Containers, serverless, and microservices, oh my! Cut to the chase and learn how to write apps once and deploy anywhere with emerging technologies.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749666851/Blog/Hero%20Images/gitlab-serverless-blog.png","https://about.gitlab.com/blog/google-gitlab-serverless-webinar","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"Write once, deploy anywhere: Containerized applications on modern serverless platforms\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Tina Sturgis\"}],\n        \"datePublished\": \"2019-06-13\",\n      }",{"title":1645,"description":1640,"authors":1646,"heroImage":1641,"date":1648,"body":1649,"category":300,"tags":1650},"Write once, deploy anywhere: Containerized applications on modern serverless platforms",[1647],"Tina Sturgis","2019-06-13","\n\nUsing containers has become standard practice in app development today. We all get the value of why you want to build with containers. But as a developer, why should you care about [serverless](/topics/serverless/)? It’s simple, you can eliminate worry about the infrastructure that your app is going to run on and focus on the impact of the app itself. 
Specifically, the business logic of how the app will interact with things like end users and operating systems.\n\nThe concepts of serverless quickly move the conversation towards one around a microservices architecture. As we move away from building applications as monoliths, moving towards serverless and eliminating the need to worry about that infrastructure begins to make a lot more sense.\n\nSo how do we take these concepts that promise increased velocity, flexibility, and scalability, and put them into action in your own application development?\n\nFind out at our webinar, \"Running containerized applications on modern serverless platforms\" on June 25, 2019 with GitLab and Google experts. We'll take a deep dive into how new and emerging technologies like Kubernetes, Knative, Cloud Run, and GitLab Serverless can provide great stability and scalability while lowering costs and increasing the pace of innovation.\n\n[Reserve your spot.](https://webinars.devops.com/running-containerized-applications-on-modern-serverless-platforms)\n{: .alert .alert-gitlab-purple .text-center}\n",[1149,685,232,108,9],{"slug":1652,"featured":6,"template":688},"google-gitlab-serverless-webinar","content:en-us:blog:google-gitlab-serverless-webinar.yml","Google Gitlab Serverless Webinar","en-us/blog/google-gitlab-serverless-webinar.yml","en-us/blog/google-gitlab-serverless-webinar",{"_path":1658,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":1659,"content":1665,"config":1670,"_id":1672,"_type":13,"title":1660,"_source":15,"_file":1673,"_stem":1674,"_extension":18},"/en-us/blog/google-next-2018-recap",{"title":1660,"description":1661,"ogTitle":1660,"ogDescription":1661,"noIndex":6,"ogImage":1662,"ogUrl":1663,"ogSiteName":675,"ogType":676,"canonicalUrls":1663,"schema":1664},"Google Next 2018 Recap","Several GitLab team-members participated in Google Next in San Francisco. 
Here’s a recap of what went on.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749679821/Blog/Hero%20Images/melody-meckfessel-gitlab-google-next-keynote.png","https://about.gitlab.com/blog/google-next-2018-recap","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"Google Next 2018 Recap\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"William Chia\"}],\n        \"datePublished\": \"2018-07-27\",\n      }",{"title":1660,"description":1661,"authors":1666,"heroImage":1662,"date":1667,"body":1668,"category":300,"tags":1669},[1343],"2018-07-27","\n\n## Google Partner Award Winner for Innovative Solution in Developer Ecosystem\n\nGoogle's Partner Summit kicked off a day before the broader Next conference started. At the summit, we were honored to receive the Google Cloud Partner Award for Innovative Solution in Developer Ecosystem for the [tight integration with GKE](/partners/technology-partners/google-cloud-platform/) we released earlier this year. 
Of course, we decided to take some fun photos with the cloud logo.\n\n![Sid Sijbrandij and Google execs](https://about.gitlab.com/images/blogimages/google-next-2018/sid-sijbrandij-google-execs.jpg){: .shadow.large.center}\n\n![Sid Sijbrandij and Google tech partner team](https://about.gitlab.com/images/blogimages/google-next-2018/sid-sijbrandij-google-tech-partner-team.jpg){: .shadow.large.center}\n\n![Eliran Mesika with GitLab's award + GitLab team with award](https://about.gitlab.com/images/blogimages/google-next-2018/eliran-mesika-gitlab-google-award-team.jpg){: .large.center}\n\n## Launch partner for GCP Marketplace with Kubernetes Apps\n\n![GCP Marketplace launch partners at Google Next](https://about.gitlab.com/images/blogimages/google-next-2018/gcp-marketplace-launch-partners-google-next.jpg){: .shadow.medium.center}\n\nWhile the GCP Marketplace announcement went out a few days before the show, there was still [a lot of buzz about it at Google Next](https://www.youtube.com/watch?v=C6koWw0r07Y&amp=&t=28m29s). In addition to traditional apps, which deploy VMs on Compute Engine, the new GCP Marketplace now supports Kubernetes apps, which deploy to a Kubernetes cluster running on Google Kubernetes Engine. We were happy to be a launch partner, offering the ability to [install GitLab via the GCP Marketplace](/blog/install-gitlab-one-click-gcp-marketplace/) on day one.\n\n## Serverless, Knative, and Istio\n\n[Knative](https://cloud.google.com/knative/) and [Istio](https://istio.io/) are two new projects announced during the show that we're excited about. Knative enables \"serverless\" workloads on Kubernetes while Istio is a service mesh for microservices. 
Check out [Josh](/company/team/#joshlambert) chatting live with [Sid](/company/team/#sytses) from the show (where Wi-Fi was a bit choppy) about serverless, Knative, and Istio, and how these technologies can potentially tie in with GitLab.\n\n\u003C!-- blank line -->\n\u003Cfigure class=\"video_container\">\n  \u003Ciframe src=\"https://www.youtube.com/embed/k1jK4F4NoBw\" frameborder=\"0\" allowfullscreen=\"true\"> \u003C/iframe>\n\u003C/figure>\n\u003C!-- blank line -->\n\n## Google Cloud Build + GitLab CI/CD\n\nOne of the key announcements from the show was the introduction of Google Cloud Build, a CI/CD tool for GCP. Many folks asked us if we saw this as competitive to GitLab CI/CD, and how that would affect our partnership with Google. First and foremost, GitLab supports a multi-cloud strategy. We partner with all of the major cloud vendors to ensure GitLab CI/CD can support multi-cloud deployments. Many cloud vendors have their own CI/CD tooling, like AWS Code Deploy or IBM Cloud Pipelines. For us, Cloud Build is just another point of collaboration. In fact, our own [Josh Lambert](/company/team/#joshlambert) teamed up with [Christopher Sanson](https://www.linkedin.com/in/christophersanson/) to create a GitLab + Google demo for Christopher's session, \"CI/CD for Hybrid and Multi-Cloud Customers.\"\n\n![Christopher Sanson demos GitLab CI/CD with Cloud Build](https://about.gitlab.com/images/blogimages/google-next-2018/christopher-sanson-gitlab-cicd.jpg){: .shadow.medium.center}\n\nFirst, Christopher showed how to use GitLab as your code repo with Cloud Build as your CI/CD connected up via webhooks to Cloud Functions. Here's a link to some [sample code for setting up a Cloud Function to trigger cloud build from GitLab](https://gitlab.com/joshlambert/cloud-function-trigger) if you'd like to try it out yourself.\n\nThen Christopher showed how to use GitLab CI/CD and GitLab container registry while offloading the infrastructure build to Google Cloud Build. 
Using Google Cloud Build together with GitLab CI/CD is one way to overcome some of the security problems of docker-in-docker (e.g., it requires privileged containers). Check out the video below to see it in action. Additionally, here's an example Ruby app with a [sample configuration for connecting GitLab CI/CD to Cloud Build](https://gitlab.com/joshlambert/minimal-ruby-app/merge_requests/1/diffs).\n\n\u003C!-- blank line -->\n\u003Cfigure class=\"video_container\">\n  \u003Ciframe src=\"https://www.youtube.com/embed/IUKCbq1WNWc?start=1324\" frameborder=\"0\" allowfullscreen=\"true\"> \u003C/iframe>\n\u003C/figure>\n\u003C!-- blank line -->\n\n\n## GitLab.com is migrating to GCP\n\n![Melody Meckfessel talks GitLab GCP migration during keynote](https://about.gitlab.com/images/blogimages/google-next-2018/melody-meckfessel-gitlab-google-next-keynote.png){: .shadow.medium.center}\n\n>\"Our friends at GitLab have created a complete open source DevOps stack\" - [Melody Meckfessel](https://www.linkedin.com/in/melodymeckfessel/), Vice President of Engineering, Google Cloud Platform\n\nAs part of our plans to make GitLab.com a rock-solid, enterprise-ready SaaS offering, we are migrating from Azure to Google Cloud Platform. We’ve been carefully planning this migration for many months and are now very close to executing, with a target migration date of August 11. Melody Meckfessel talked a bit about our migration during her keynote on Thursday. Check out our previous blog post to read up on the [full details of GitLab’s GCP migration](/blog/gcp-move-update/).  
\n\n\u003C!-- blank line -->\n\u003Cfigure class=\"video_container\">\n  \u003Ciframe src=\"https://www.youtube.com/embed/JQPOPV_VH5w?start=1363\" frameborder=\"0\" allowfullscreen=\"true\"> \u003C/iframe>\n\u003C/figure>\n\u003C!-- blank line -->\n\n## Talking to you\n\n![William, Mike, and Reb in the GitLab booth](https://about.gitlab.com/images/blogimages/google-next-2018/william-chia-mike-walsh-gitlab-booth-duo.jpg){: .shadow.large.center}\n\nOf course one of our favorite parts of any trade show is getting to meet our users and customers face to face. We love hearing the palpable excitement when you talk about how GitLab is streamlining your toolchain or easing your move to Kubernetes. We love sharing the story with folks who don’t know yet and seeing their faces light up when we tell them GitLab’s not just a version control solution, but an end-to-end DevOps application with built-in project planning, CI/CD, container registry, monitoring, and more. Google Next ’18 was a great show, and we can’t wait to see you next time! 
Check out the [full list of events](/events) we’ll be at to find one close to you.\n",[278,1149,727,1150,9],{"slug":1671,"featured":6,"template":688},"google-next-2018-recap","content:en-us:blog:google-next-2018-recap.yml","en-us/blog/google-next-2018-recap.yml","en-us/blog/google-next-2018-recap",{"_path":1676,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":1677,"content":1683,"config":1689,"_id":1691,"_type":13,"title":1692,"_source":15,"_file":1693,"_stem":1694,"_extension":18},"/en-us/blog/google-next-2018-security-track-recap",{"title":1678,"description":1679,"ogTitle":1678,"ogDescription":1679,"noIndex":6,"ogImage":1680,"ogUrl":1681,"ogSiteName":675,"ogType":676,"canonicalUrls":1681,"schema":1682},"Google Next 2018 security track recap","Here's how one GitLab team-member made the most of the security track at Google Next 2018.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749678940/Blog/Hero%20Images/securitygooglenext.jpg","https://about.gitlab.com/blog/google-next-2018-security-track-recap","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"Google Next 2018 security track recap\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Jim Thavisouk\"}],\n        \"datePublished\": \"2018-08-10\",\n      }",{"title":1678,"description":1679,"authors":1684,"heroImage":1680,"date":1686,"body":1687,"category":300,"tags":1688},[1685],"Jim Thavisouk","2018-08-10","\nEvery time someone asks me how I like working at GitLab, I say, \"I love it here!\"\nWith our [company culture](https://handbook.gitlab.com/handbook/values/), 100 percent [remote workforce](/company/culture/all-remote/), and [growing team](/jobs/), it's a pleasure\nto work with such a high energy team.\nThe [security department](https://handbook.gitlab.com/handbook/security/#security-department)\nis continually growing -- very fast! 
We each have our own specialties and bring a diverse selection\nof strong experiences, while working very well together. In my position, I have\nbeen focusing very heavily on policy as code to raise the bar in security here at GitLab. This blog post was inspired by [William Chia](/company/team/#thewilliamchia)'s\n[Google Next 2018 recap](/blog/google-next-2018-recap/). If you haven't read it, I highly recommend it!\n\n## Security highlights of Google Next 2018\n\n### Forseti\n\nI was excited coming into this conference for [Forseti](https://forsetisecurity.org/),\nespecially with the announcement of\n[Forseti 2.0](https://forsetisecurity.org/news/2018/06/11/forseti-2.0-launch.html).\nWe had a [Forseti Hack Day](https://groups.google.com/a/forsetisecurity.org/forum/#!topic/announce/bHy8QCK_AY0)\nthat kicked off a day before the actual conference, which allowed me to interact\nwith Google engineers, product managers, and Forseti customers. For\nanyone who missed Forseti's session from [Chris Law](https://www.linkedin.com/in/chrislaw/),\n[Michael Capicotto](https://www.linkedin.com/in/mcapicotto/), and\n[Marten Van Wezel](https://www.linkedin.com/in/martenvanwezel/), you can check out\n[the recording](https://www.youtube.com/watch?v=4TrlgbV_VlQ). See [the details for joining the discussion here](https://groups.google.com/a/forsetisecurity.org/forum/#!topic/announce/8OSAB7UEzSY).\n\n### Istio\n\n[\"Istio is platform-independent and designed to run in a variety of environments,\nincluding those spanning Cloud, on-premise, Kubernetes, Mesos, and more.\"](https://istio.io/docs/concepts/what-is-istio/)\nI'm excited to see Istio 1.0, which was just released a few days ago! See [the team's talk](https://youtu.be/eOI2aM9P7-c)\nfrom [Tao Li](https://www.linkedin.com/in/tao-li-1a447935/) and\n[Samrat Ray](https://www.linkedin.com/in/samratray/).\n\n### Best practices\n\nEveryone can use best practices. 
At Forseti Hack Day, I met [Tom Salmon](https://www.linkedin.com/in/tomcsalmon/)\nwho has vast experience in security. In his [talk](https://www.youtube.com/watch?v=ZQHoC0cR6Qw),\nhe provides a great knowledge base and reference point to best security practices in GCP.\n\n### Sessions are now live\n\nThese were only a few sessions at Google Next, and there are hundreds of others\nto check out. You can find them neatly categorized on\n[YouTube](https://www.youtube.com/channel/UCTMRxtyHoE3LPcrl-kT4AQQ/playlists?flow=grid&view=50&shelf_id=8).\n\n## We'd love to hear your feedback\n\nWe'd love to hear from you on how you use any of these products in your environment.\nOur team is currently working very closely with the Forseti team, and I'm sure they\nwould love to have you join in on the discussion as well. Don't hesitate to\nreach out directly to me by email (jthavisouk@gitlab.com) or join any of these groups to keep a dialogue going\nabout any of these products. We can only help each other in the process.\n",[278,1149,727,1150,9,855],{"slug":1690,"featured":6,"template":688},"google-next-2018-security-track-recap","content:en-us:blog:google-next-2018-security-track-recap.yml","Google Next 2018 Security Track Recap","en-us/blog/google-next-2018-security-track-recap.yml","en-us/blog/google-next-2018-security-track-recap",{"_path":1696,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":1697,"content":1702,"config":1708,"_id":1710,"_type":13,"title":1711,"_source":15,"_file":1712,"_stem":1713,"_extension":18},"/en-us/blog/google-next-post",{"title":1698,"description":1699,"ogTitle":1698,"ogDescription":1699,"noIndex":6,"ogImage":782,"ogUrl":1700,"ogSiteName":675,"ogType":676,"canonicalUrls":1700,"schema":1701},"What to check out at Google Cloud Next 2019","Support women who code by stopping by our booth, learn from a host of GitLab experts, and more.","https://about.gitlab.com/blog/google-next-post","\n                        {\n        \"@context\": 
\"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"What to check out at Google Cloud Next 2019\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Mayank Tahilramani\"}],\n        \"datePublished\": \"2019-04-04\",\n      }",{"title":1698,"description":1699,"authors":1703,"heroImage":782,"date":1705,"body":1706,"category":300,"tags":1707},[1704],"Mayank Tahilramani","2019-04-04","\n\nIt’s that time of the year to indulge in all things innovative and new at Google Cloud Next 2019.\nAs an attendee last year, I was excited to learn about Google’s vision on ‘bringing the cloud to you’\nwith a focus on hybrid cloud and unveiling of GKE On-Prem. GitLab’s partnership with Google\nhas grown a lot since we launched our quick and easy [integration with GKE](/partners/technology-partners/google-cloud-platform/)\nlast year and we hope you will come out to see some of the new things we have going on.\n\n### Don't be shy, come say hi 👋\n\nCome visit us at our booth (#S1607), get scanned, and GitLab will donate $5 to your\ncharity of choice: [Rail Girls](http://railsgirls.com/) or [Django Girls](https://djangogirls.org/).\nThis also enters you for a chance to win an iPad Pro!\n\nWhile you're there, we would love to showcase and talk about:\n\n* GitLab’s [AutoDevOps](https://docs.gitlab.com/ee/topics/autodevops/) functionality.\n* Using GitLab to [secure your applications](/stages-devops-lifecycle/secure/).\n* How to get started with [GitLab for GCP on GKE](/partners/technology-partners/google-cloud-platform/) and GKE On-Prem.\n* GitLab [Serverless with Knative](/topics/serverless/) and [Cloud Run](https://cloud.google.com/blog/products/serverless/announcing-cloud-run-the-newest-member-of-our-serverless-compute-stack),\n* ... 
and much more!\n\n### Sit back, relax, and listen to some of our experts live\n\n* Check out [Brandon Jung](/company/team/#brandoncjung) (VP of Alliances) discuss [GitLab’s move from Azure to GCP](https://cloud.withgoogle.com/next/sf/sessions?session=ARC207) which includes a technical\noverview of the migration as well as lessons learned. Check out our customer case study [here](https://cloud.google.com/customers/gitlab/).\n\n* Come listen to [Kathy Wang](/company/team/#wangkathy) (Senior Director of Security) tell our journey [Towards Zero Trust at GitLab.com](https://cloud.withgoogle.com/next/sf/sessions?session=SEC220) along with key lessons learned. ([You can read more about the evolution of Zero Trust here](/blog/evolution-of-zero-trust/).)\n\n* Learn something new with [Daniel Gruesso](/company/team/#danielgruesso) (Product Manager) showcasing GitLab’s serverless functionality to [Run a consistent serverless platform anywhere with Kubernetes and Knative](https://cloud.withgoogle.com/next/sf/sessions?session=HYB218).\n\n### Get hands on with Qwiklabs\n\nLearn from [Dan Gordon](/company/team/#dbgordon) (Senior Technical Marketing Manager) at our [Spotlight Lab: Introduction to GitLab on GKE](https://cloud.withgoogle.com/next/sf/sessions?session=301353-133371). 
Here you will have the chance to deploy GitLab on GKE, migrate a GitHub repository into a GitLab Project, and set up a CI/CD pipeline with AutoDevOps to deploy your code to GKE.\n\nSo stop by and say hello!\n\nWe are proud to be a sponsor at this event and would love to see as many of you at our booth (S1607) to discuss GitLab [Serverless](/topics/serverless/) with Knative and Cloud Run, GitLab’s integration with GKE, GitLab AutoDevOps for CI/CD, Security functionalities, as well as GitLab’s support for GKE On-Prem.\n",[901,9,108,685,232,835,855,902],{"slug":1709,"featured":6,"template":688},"google-next-post","content:en-us:blog:google-next-post.yml","Google Next Post","en-us/blog/google-next-post.yml","en-us/blog/google-next-post",{"_path":1715,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":1716,"content":1722,"config":1728,"_id":1730,"_type":13,"title":1731,"_source":15,"_file":1732,"_stem":1733,"_extension":18},"/en-us/blog/how-gitlab-can-help-mitigate-deletion-open-source-images-docker-hub",{"title":1717,"description":1718,"ogTitle":1717,"ogDescription":1718,"noIndex":6,"ogImage":1719,"ogUrl":1720,"ogSiteName":675,"ogType":676,"canonicalUrls":1720,"schema":1721},"GitLab helps mitigate Docker Hub's open source image removal","CI/CD and Kubernetes deployments can be affected by Docker Hub tier changes. 
This tutorial walks through analysis, mitigations, and long-term solutions.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749659883/Blog/Hero%20Images/post-cover-image.jpg","https://about.gitlab.com/blog/how-gitlab-can-help-mitigate-deletion-open-source-images-docker-hub","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"How GitLab can help mitigate deletion of open source container images on Docker Hub\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Michael Friedrich\"}],\n        \"datePublished\": \"2023-03-16\",\n      }",{"title":1723,"description":1718,"authors":1724,"heroImage":1719,"date":1725,"body":1726,"category":683,"tags":1727},"How GitLab can help mitigate deletion of open source container images on Docker Hub",[1022],"2023-03-16","Docker, Inc. shared an email update to Docker Hub users that it will [sunset\nFree Team\norganizations](https://www.infoworld.com/article/3690890/docker-sunsets-free-team-subscriptions-roiling-open-source-projects.html).\nIf accounts do not upgrade to a paid plan before April 14, 2023, their\norganization's images may be deleted after 30 days. This change can affect\nopen source organizations that publish their images on Docker Hub, as well\nas consumers of these container images, used in CI/CD pipelines, Kubernetes\ncluster deployments, or docker-compose demo environments. This blog post\ndiscusses tools and features on the GitLab DevSecOps platform to help users\nanalyze and mitigate the potential impact on production environments.\n\n\n_Update (March 20, 2023): Docker, Inc. [published an apology blog\npost](https://www.docker.com/blog/we-apologize-we-did-a-terrible-job-announcing-the-end-of-docker-free-teams/),\nincluding a FAQ, and clarifies that the company will not delete container\nimages by themselves. 
Maintainers can migrate to a personal account, join\nthe Docker-sponsored open source program, or opt into a paid plan. If open\nsource container image maintainers do nothing, this leads into another\nissue: Stale container images can become a security problem. The following\nblog post can help with security analysis and migration too._\n\n\n_Update (March 27, 2023): On March 24, 2023, Docker, Inc. [published another\nblog\npost](https://www.docker.com/blog/no-longer-sunsetting-the-free-team-plan/)\nannouncing the reversal of the decision to sunset the Free team plan and\nupdated its [FAQ for Free Team\norganization](https://www.docker.com/developers/free-team-faq/). While this\nis a welcome development for the entire community, it is still crucial to\nensure the reliability of your software development lifecycle by ensuring\nredundancies are in place for your container registries, as detailed in this\nblog post._\n\n\n### Inventory of used container images\n\n\nCI/CD pipelines in GitLab can execute jobs in containers. This is specified\nby the [`image` keyword](https://docs.gitlab.com/ee/ci/yaml/#image) in jobs,\njob templates, or as a global\n[`default`](https://docs.gitlab.com/ee/ci/yaml/#default) attribute. For the\nfirst iteration, you can clone a GitLab project locally, and search for the\n`image` string in all CI/CD configuration files. The following example shows\nhow to execute the `find` command on the command line interface (CLI),\nsearching for files matching the name pattern `*ci.yml`, and looking for the\n`image` string in the file content. The command line prints a list of search\npattern matches, and the corresponding file name to the standard output. 
The\nexample inspects the [project](https://gitlab.com/gitlab-com/www-gitlab-com)\nfor the [GitLab handbook](https://handbook.gitlab.com/handbook/) and\n[website](https://about.gitlab.com/) to analyze whether its CI/CD deployment\npipelines could be affected by the Docker Hub changes.\n\n\n```bash\n\n$ git clone https://gitlab.com/gitlab-com/www-gitlab-com && cd\nwww-gitlab-com\n\n\n$ find . -type f -iname '*ci.yml' -exec sh -c \"grep 'image:' '{}' && echo\n{}\" \\;\n\n  image: registry.gitlab.com/gitlab-org/gitlab-build-images:www-gitlab-com-debian-${DEBIAN_VERSION}-ruby-3.0-node-16\n  image: alpine:edge\n  image: alpine:edge\n  image: debian:stable-slim\n  image: debian:stable-slim\n  image: registry.gitlab.com/gitlab-org/gitlab-build-images:danger\n./.gitlab-ci.yml\n\n```\n\n\nA [discussion on Hacker News](https://news.ycombinator.com/item?id=35168802)\nmentions that \"official Docker images\" are not affected, but this is not\nofficially confirmed by Docker yet. [Official Docker\nimages](https://hub.docker.com/u/library) do not use a namespace prefix\n(`namespace/imagename`); they are referenced as, for example, `debian:\u003Ctagname>`.\n`registry.gitlab.com/gitlab-org/gitlab-build-images:danger` uses a full URL\nimage string, which includes the image registry server domain,\n`registry.gitlab.com` in the shown example.\n\n\nIf there is no full URL prefix in the image string, this is an indicator\nthat this image could be pulled from Docker Hub by default. There might be\nother infrastructure safety nets put in place, for example a cloud provider\nregistry which caches the Docker Hub images (Google Cloud, AWS, Azure,\netc.).\n\n\n#### Advanced search for images\n\n\nYou can use the [project lint API\nendpoint](https://docs.gitlab.com/ee/api/lint.html#validate-a-projects-ci-configuration)\nto fetch the CI configuration. 
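Whichever way you retrieve the configuration, the registry-prefix heuristic described above can be sketched as a small helper. This is a sketch only, not part of any GitLab tooling, and the function names are illustrative:

```python
# Sketch: classify `image:` references by registry. Docker's implicit
# default registry applies when the first path component does not look
# like a host name (no ".", no ":", and not "localhost").
def registry_host(image: str) -> str:
    first, _, rest = image.partition("/")
    if rest and ("." in first or ":" in first or first == "localhost"):
        return first
    return "docker.io"  # implicit default registry

def is_docker_hub_image(image: str) -> bool:
    return registry_host(image) in ("docker.io", "registry-1.docker.io")

images = [
    "alpine:edge",
    "debian:stable-slim",
    "registry.gitlab.com/gitlab-org/gitlab-build-images:danger",
]
for ref in images:
    print(f"{ref} -> Docker Hub: {is_docker_hub_image(ref)}")
```

Running this over the inventory collected above flags every image that would be pulled from Docker Hub by default.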
The following script uses the [python-gitlab\nAPI\nlibrary](https://python-gitlab.readthedocs.io/en/stable/gl_objects/ci_lint.html)\nto implement the API endpoint:\n\n\n1. Collect all projects from either a single project ID, a group ID with\nprojects, or from the instance.\n\n2. Run the `project.ci_lint.get()` method to get a merged yaml configuration\nfor CI/CD from the current GitLab project.\n\n3. Parse the yaml content and print only the job names, and the image keys.\n\n\nThe [full script is located\nhere](https://gitlab.com/gitlab-da/use-cases/gitlab-api/gitlab-api-python/-/blob/main/get_all_cicd_job_images.py),\nand is open source, licensed under MIT.\n\n\n```python\n\n#!/usr/bin/env python\n\n\nimport gitlab\n\nimport os\n\nimport sys\n\nimport yaml\n\n\nGITLAB_SERVER = os.environ.get('GL_SERVER', 'https://gitlab.com')\n\nGITLAB_TOKEN = os.environ.get('GL_TOKEN') # token requires developer\npermissions\n\nPROJECT_ID = os.environ.get('GL_PROJECT_ID') #optional\n\n# https://gitlab.com/gitlab-da/use-cases/docker\n\nGROUP_ID = os.environ.get('GL_GROUP_ID', 65096153) #optional\n\n\n#################\n\n# Main\n\n\nif __name__ == \"__main__\":\n    if not GITLAB_TOKEN:\n        print(\"🤔 Please set the GL_TOKEN env variable.\")\n        sys.exit(1)\n\n    gl = gitlab.Gitlab(GITLAB_SERVER, private_token=GITLAB_TOKEN)\n\n    # Collect all projects, or prefer projects from a group id, or a project id\n    projects = []\n\n    # Direct project ID\n    if PROJECT_ID:\n        projects.append(gl.projects.get(PROJECT_ID))\n\n    # Groups and projects inside\n    elif GROUP_ID:\n        group = gl.groups.get(GROUP_ID)\n\n        for project in group.projects.list(include_subgroups=True, all=True):\n            # https://python-gitlab.readthedocs.io/en/stable/gl_objects/groups.html#examples\n            manageable_project = gl.projects.get(project.id)\n            projects.append(manageable_project)\n\n    # All projects on the instance (may take a while to process)\n    
else:\n        projects = gl.projects.list(get_all=True)\n\n    print(\"# Summary of projects and their CI/CD image usage\")\n\n    # Loop over projects, fetch .gitlab-ci.yml, run the linter to get the full translated config, and extract the `image:` setting\n    for project in projects:\n\n        print(\"# Project: {name}, ID: {id}\\n\\n\".format(name=project.name_with_namespace, id=project.id))\n\n        # https://python-gitlab.readthedocs.io/en/stable/gl_objects/ci_lint.html\n        lint_result = project.ci_lint.get()\n\n        data = yaml.safe_load(lint_result.merged_yaml)\n\n        for d in data:\n            print(\"Job name: {n}\".format(n=d))\n            for attr in data[d]:\n                if 'image' in attr:\n                    print(\"Image: {i}\".format(i=data[d][attr]))\n\n        print(\"\\n\\n\")\n\nsys.exit(0)\n\n```\n\n\nThe\n[script](https://gitlab.com/gitlab-de/use-cases/gitlab-api/gitlab-api-python/-/blob/main/get_all_cicd_job_images.py)\nrequires Python (tested with 3.11) and the python-gitlab and pyyaml modules.\nExample on macOS with Homebrew:\n\n\n```shell\n\n$ brew install python\n\n$ pip3 install python-gitlab pyyaml\n\n```\n\n\nYou can execute the script and set the different environment variables to\ncontrol its behavior:\n\n\n```shell\n\n$ export GL_TOKEN=$GITLAB_TOKEN\n\n\n$ export GL_GROUP_ID=12345\n\n$ export GL_PROJECT_ID=98765\n\n\n$ python3 get_all_cicd_job_images.py\n\n\n# Summary of projects and their CI/CD image usage\n\n# Project: Developer Evangelism at GitLab  / use-cases / Docker Use cases  /\nCustom Container Image Python, ID: 44352983\n\n\nJob name: docker-build\n\nImage: docker:latest\n\n\n# Project: Developer Evangelism at GitLab  / use-cases / Docker Use cases  /\nGitlab Dependency Proxy, ID: 44351128\n\n\nJob name: .test-python-version\n\nJob name: image-docker-hub\n\nImage: python:3.11\n\nJob name: image-docker-hub-dep-proxy\n\nImage: ${CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX}/python:3.11\n\n```\n\n\nPlease 
verify the script and fork it for your own analysis and mitigation.\nThe missing parts are checking the image URLs, and doing a more\nsophisticated search. The code has been prepared to either check against a\nsingle project, a group with projects, or an instance (this may take a very\nlong time; use with care).\n\n\nYou can perform a more history-focused analysis by fetching the CI/CD job\nlogs from GitLab and searching for the pulled container image to get an\noverview of past Docker executor runs – for example: `Using Docker executor\nwith image python:3.11 ...`. The screenshot shows the CI/CD job logs UI\nsearch – you can automate the search using the GitLab API, and the\n[python-gitlab\nlibrary](https://python-gitlab.readthedocs.io/en/stable/gl_objects/pipelines_and_jobs.html#jobs),\nfor example.\n\n\n![GitLab CI/CD job logs, searching for the `image`\nkeyword](https://about.gitlab.com/images/blogimages/docker-hub-oss-image-deletion-mitigation/cicd_gitlab_job_logs_search_image.png)\n\n\nThis snippet can be used in combination with the code shared for the CI lint\nAPI endpoint. It fetches the job trace logs, and searches for the `image`\nkeyword in the log. The missing parts are splitting the log line by line,\nand extracting the image key information. This is left as an exercise for\nthe reader.\n\n\n```python\n        for job in project.jobs.list():\n            log_trace = str(job.trace())\n\n            if 'image' in log_trace:\n                print(\"Job ID: {i}, URL {u}\".format(i=job.id, u=job.web_url))\n                print(log_trace)\n```\n\n\n### More inventory considerations\n\n\nSimilar to the API script for CI/CD navigating through all projects, you\nwill need to analyze all Kubernetes manifest configuration files – using\neither a pull- or push-based approach. 
This can be achieved by using the\n[python-gitlab methods to load files from the\nrepository](https://python-gitlab.readthedocs.io/en/stable/gl_objects/projects.html#project-files)\nand searching the content in similar ways. Helm charts use container images,\ntoo, and will require additional analysis.\n\n\nAn additional search possibility: Custom-built container images that use\nDocker Hub images as a source. A project will consist of:\n\n\n1. `Dockerfile` file that uses `FROM \u003Cimagename>`\n\n2. `.gitlab-ci.yml` configuration file that builds container images (using\nDocker-in-Docker, Kaniko, etc.)\n\n\nAn alternative search method for customers is available by using the\n[Advanced\nSearch](https://docs.gitlab.com/ee/user/search/advanced_search.html) through\nthe GitLab UI and API. The following example uses the [scope:\nblobs](https://docs.gitlab.com/ee/api/search.html#scope-blobs-premium-2) to\nsearch for the `FROM` string:\n\n\n```shell\n\n$ export GITLAB_TOKEN=xxxxxxxxx\n\n\n# Search in https://gitlab.com/gitlab-da\n\n/use-cases/docker/custom-container-image-python\n\n\n$ curl --header \"PRIVATE-TOKEN: $GITLAB_TOKEN\"\n\"https://gitlab.com/api/v4/projects/44352983/search?scope=blobs&search=FROM%20filename:Dockerfile*\"\n\n```\n\n\n![Command line output from Advanced Search API, scope blobs, search `FROM`\nin `Dockerfile*` file\nnames.](https://about.gitlab.com/images/blogimages/docker-hub-oss-image-deletion-mitigation/cli_gitlab_advanced_search_api_dockerfile_from.png)\n\n\n## Mitigations and solutions\n\n\nThe following sections discuss potential mitigation strategies, and\nlong-term solutions.\n\n\n### Mitigation: GitLab dependency proxy\n\n\nThe dependency proxy provides a caching mechanism for Docker Hub images. It\nhelps reduce the bandwidth and time required to download and pull the\nimages. It also helped to [mitigate the Docker Hub pull rate limits\nintroduced in\n2020](/blog/minor-breaking-change-dependency-proxy/). 
The\ndependency proxy can be configured for public and private projects.\n\n\nThe [dependency\nproxy](https://docs.gitlab.com/ee/user/packages/dependency_proxy/) needs to\nbe enabled for a group. It also needs to be enabled by an instance\nadministrator for self-managed environments, if turned off.\n\n\nThe following example creates two jobs: `image-docker-hub` and\n`image-docker-hub-dep-proxy`. The dependency proxy job uses the\n`CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX` CI/CD variable to instruct GitLab\nto store the image in the cache, and only pull it once when not available.\n\n\n```yaml\n\n.test-python-version:\n  script:\n    - echo \"Testing Python version:\"\n    - python --version\n\nimage-docker-hub:\n  extends: .test-python-version\n  image: python:3.11\n\nimage-docker-hub-dep-proxy:\n  extends: .test-python-version\n  image: ${CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX}/python:3.11\n```\n\n\nThe configuration is available in [this\nproject](https://gitlab.com/gitlab-de/use-cases/docker/gitlab-dependency-proxy).\n\n\nThe stored container image is visible at the group level in the `Package and\ncontainer registries > Dependency Proxy` menu.\n\n\n### Mitigation: Container registry mirror\n\n\n[This blog\npost](/blog/mitigating-the-impact-of-docker-hub-pull-requests-limits/)\ndescribes how to run a local container registry mirror. Skopeo from Red Hat\nis another alternative for syncing container image registries, a practical\nexample is described [in this\narticle](https://marcbrandner.com/blog/transporting-container-images-with-skopeo/).\n\n\nThe GitLab Cloud Native installation ([Helm\ncharts](https://docs.gitlab.com/charts/) and\n[Operator](https://docs.gitlab.com/operator/)) use a [mirror of tagged\nimages](https://gitlab.com/gitlab-org/cloud-native/mirror/images) consumed\nby the related projects. 
Other product stages follow a similar approach: the\n[security scanners are shipped in container\nimages](https://docs.gitlab.com/ee/user/application_security/offline_deployments/#container-registries-and-package-repositories)\nmaintained by GitLab. This also enables self-managed airgapped\ninstallations.\n\n\n### Mitigation: Custom images in GitLab container registry\n\n\nReproducible builds and compliance requirements may have required you to\ncreate custom container images for CI/CD and Kubernetes already. This is\nalso key to verifying that no untested or untrusted images are being used in\nproduction. GitLab provides a fully integrated [container\nregistry](https://docs.gitlab.com/ee/user/packages/container_registry/),\nwhich can be used natively within CI/CD pipelines and [GitOps workflows with\nthe agent for\nKubernetes](https://docs.gitlab.com/ee/user/clusters/agent/gitops.html).\n\n\nThe following `Dockerfile` example extends an existing image layer, and\ninstalls additional tools using the Debian Apt package manager.\n\n\n```\n\nFROM python:3.11-bullseye\n\n\nENV DEBIAN_FRONTEND noninteractive\n\n\nRUN apt update && apt -y install git curl jq && rm -rf /var/lib/apt/lists/*\n\n```\n\n\nYou can [use Docker to build container\nimages](https://docs.gitlab.com/ee/ci/docker/using_docker_build.html), and\nalternative options are Kaniko or Podman. On GitLab.com SaaS, you can use\nthe Docker CI/CD template to build and push images. 
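Where Kaniko is preferred, a build job can be sketched roughly like this. The job name and tag variable are illustrative; the executor image and flags follow GitLab's documented Kaniko pattern, so verify against the current docs before adopting it:

```yaml
# Sketch only: build and push an image with Kaniko instead of Docker-in-Docker.
build-kaniko:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHORT_SHA}"
```

Kaniko performs the build in userspace without a privileged Docker daemon, which is why it is often chosen on shared runners.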
The following example\nmodifies the `docker-build` job to only build the latest tag from the\ndefault branch:\n\n\n```yaml\n\ninclude:\n  - template: Docker.gitlab-ci.yml\n\ndocker-build:\n  stage: build\n  rules:\n    - if: '$CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH || $CI_COMMIT_TAG'\n      #when: manual\n      #allow_failure: true\n```\n\n\nFor this example, we specifically want to provide a Git tag that gets used\nfor the container image tag as well.\n\n\n```\n\n$ git tag 3-11-bullseye\n\n$ git push --tags\n\n```\n\n\nThe image will be available at the GitLab container registry URL and the\nproject namespace path.This path needs to be replaced in all projects that\nuse a Python-based image. You can [create scripts for the GitLab\nAPI](/blog/efficient-devsecops-workflows-hands-on-python-gitlab-api-automation/)\nto update files and create MRs automatically,\n\n\n```\n\nimage:\nregistry.gitlab.com/gitlab-da/use-cases/docker/custom-container-image-python:3-11-bullseye\n\n```\n\n\n_Note: This is a demo project and not actively maintained. Please fork/copy\nit for your own needs._\n\n\n## Observability and security\n\n\nThe [number of failed CI/CD\npipelines](https://docs.gitlab.com/ee/user/analytics/ci_cd_analytics.html)\ncan be a good service level indicator (SLI) to verify whether the\nenvironment is affected by the Docker Hub changes. The same SLI applies for\nCI/CD jobs that build container images, using a `Dockerfile` file, which is\nbased on Docker Hub images (FROM \u003Cimagename>).\n\n\nA similar SLI applies to Kubernetes cluster deployments – if they continue\nto generate failures in GitOps pull or CI/CD push scenarios, additional\nanalysis and actions are required. The pod status `ErrImagePull` and\n[`ImagePullBackOff`](https://kubernetes.io/docs/concepts/containers/images/#imagepullbackoff)\nwill immediately show the problems. 
The [image pull\npolicy](https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy)\nshould also be revised – `Always` will immediately cause a problem, while\n`IfNotPresent` will use the local image cache.\n\n\n[This alert rule\nexample](https://awesome-prometheus-alerts.grep.to/rules.html#rule-kubernetes-1-18)\nfor Prometheus observing a Kubernetes cluster can help detect the pod state\nas not healthy.\n\n\n```yaml\n  - alert: KubernetesPodNotHealthy\n    expr: sum by (namespace, pod) (kube_pod_status_phase{phase=~\"Pending|Unknown|Failed\"}) > 0\n    for: 15m\n    labels:\n      severity: critical\n    annotations:\n      summary: Kubernetes Pod not healthy (instance {{ $labels.instance }})\n      description: \"Pod has been in a non-ready state for longer than 15 minutes.\\n  VALUE = {{ $value }}\\n  LABELS = {{ $labels }}\"\n```\n\n\nCI/CD pipeline linters and Git hooks can also be helpful to enforce using a\nGitLab registry URL prefix in all `image` tags, when new updates to CI/CD\nconfigurations are being pushed into merge requests.\n\n\nKubernetes deployment images can be controlled through additional\nintegrations with the [Open Policy Agent\nGatekeeper](https://www.openpolicyagent.org/docs/latest/kubernetes-introduction/)\nor\n[Kyverno](https://kyverno.io/policies/best-practices/restrict_image_registries/restrict_image_registries/).\nKyverno also allows you to [mutate the image registry\nlocation](https://kyverno.io/policies/other/replace_image_registry/replace_image_registry/),\nand redirect the pod image to trusted sources.\n\n\n[Operational container\nscanning](https://docs.gitlab.com/ee/user/clusters/agent/vulnerabilities.html)\nin Kubernetes clusters and [container scanning in CI/CD\npipelines](https://docs.gitlab.com/ee/user/application_security/container_scanning/)\nare recommended. 
This helps ensure that deployed images do not expose known security\nvulnerabilities.\n\n\n## Long-term solutions\n\n\nAs a long-term solution, analyze the affected Docker Hub organizations'\nimages and match them against your image usage inventory. Some organizations\nhave raised their concerns in [this Docker Hub feedback\nissue](https://github.com/docker/hub-feedback/issues/2314). Be sure to\nidentify critical production CI/CD workflows and replace all external\ndependencies with locally maintained images.\n\n\nFork/copy project Dockerfile files from the upstream Git repositories, and\nuse them as the single source of truth for custom container builds. This\nwill also require training and documentation for DevSecOps teams, for\nexample optimizing container images for [efficient CI/CD\npipelines](https://docs.gitlab.com/ee/ci/pipelines/pipeline_efficiency.html).\nMore DevSecOps efficiency tips can be found in my Chemnitz Linux Days talk\nabout \"Efficient DevSecOps Pipelines in a Cloud Native World\"\n([slides](https://go.gitlab.com/RPog2h)).\n\n\n\u003Ciframe\nsrc=\"https://docs.google.com/presentation/d/e/2PACX-1vT3jcfpddKL2jq7leX01QX6S4Y8vfLLBZMz4L1ZHMLY3xzB4IGOOIExODLEzH8YQM1atCNPm07Bw9m_/embed?start=false&loop=true&delayms=3000\"\nframeborder=\"0\" width=\"960\" height=\"569\" allowfullscreen=\"true\"\nmozallowfullscreen=\"true\" webkitallowfullscreen=\"true\">\u003C/iframe>\n\n\nPlease share your ideas and thoughts about Docker Hub change mitigations and\ntools on the [GitLab community forum](https://forum.gitlab.com/). 
Thank you!\n\n\nCover image by [Roger Hoyles](https://unsplash.com/photos/sTOQyRD8m74) on\n[Unsplash](https://www.unsplash.com)\n\n{: .note}\n",[814,9,835],{"slug":1729,"featured":6,"template":688},"how-gitlab-can-help-mitigate-deletion-open-source-images-docker-hub","content:en-us:blog:how-gitlab-can-help-mitigate-deletion-open-source-images-docker-hub.yml","How Gitlab Can Help Mitigate Deletion Open Source Images Docker Hub","en-us/blog/how-gitlab-can-help-mitigate-deletion-open-source-images-docker-hub.yml","en-us/blog/how-gitlab-can-help-mitigate-deletion-open-source-images-docker-hub",{"_path":1735,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":1736,"content":1742,"config":1749,"_id":1751,"_type":13,"title":1752,"_source":15,"_file":1753,"_stem":1754,"_extension":18},"/en-us/blog/how-gitlab-can-help-you-secure-your-cloud-native-applications",{"title":1737,"description":1738,"ogTitle":1737,"ogDescription":1738,"noIndex":6,"ogImage":1739,"ogUrl":1740,"ogSiteName":675,"ogType":676,"canonicalUrls":1740,"schema":1741},"How GitLab improves cloud native application security and protection","In this article, we will show you how GitLab can help you streamline your cloud native application security from a code and operations point of view by providing you with real-world examples.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749664102/Blog/Hero%20Images/gitlab-values-cover.png","https://about.gitlab.com/blog/how-gitlab-can-help-you-secure-your-cloud-native-applications","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"How GitLab improves cloud native application security and protection\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Nico Meisenzahl\"}],\n        \"datePublished\": \"2020-08-18\",\n      }",{"title":1737,"description":1738,"authors":1743,"heroImage":1739,"date":1745,"body":1746,"category":1747,"tags":1748},[1744],"Nico 
Meisenzahl","2020-08-18","\n{::options parse_block_html=\"true\" /}\n\nIn the [cloud-native](/topics/cloud-native/) ecosystem, decisions and changes are made on a rapid basis. Applications get adapted and deployed multiple times a week or even a day. Microservices are developed in a decentralized way, with different people and teams involved. In such an environment, it is crucial to ensure that applications are developed and operated safely. This can be done by shifting security left into the developer lifecycle but also by using DevSecOps to empower operations with enhanced monitoring and protection for the application runtime.\n\nIn this article, I would like to show you how GitLab can help you streamline your application security from a code and operations point of view by providing you with real-world examples. Before we dive deep into the example, let me first introduce you to the [GitLab Secure](https://about.gitlab.com/stages-devops-lifecycle/secure/) and [GitLab Protect](https://about.gitlab.com/stages-devops-lifecycle/govern/) product portfolio, which are the foundation for this. GitLab Secure helps developers enable accurate, automated, and continuous assessment of their applications by proactively identifying vulnerabilities and weaknesses and therefore minimizing security risk. GitLab Protect, on the other hand, supports operations by proactively protecting environments and cloud-native applications by providing context-aware technologies to reduce overall security risk. Both are backed by leading open-source projects that have been fully integrated into developer and operation processes and the GitLab user interface (UI).\n\n## Cloud Native Application Security: The attack\n\nLet’s assume we have an application hosting a web interface that allows a user to provide some input. The application is written in [Golang](https://golang.org/) and executes the input as part of an external operating system command ([os/exec](https://golang.org/pkg/os/exec/)). 
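\n\nAs a rough sketch of this vulnerable pattern (not the actual application code – the function and parameter names are made up):\n\n```go\npackage main\n\nimport (\n    \"fmt\"\n    \"os/exec\"\n)\n\n// ping concatenates raw user input into a shell command string.\n// Vulnerable: input such as \"127.0.0.1; redis-cli ...\" makes the\n// shell execute a second, attacker-controlled command.\nfunc ping(userInput string) string {\n    out, _ := exec.Command(\"sh\", \"-c\", \"ping -c 1 \"+userInput).CombinedOutput()\n    return string(out)\n}\n\nfunc main() {\n    // `echo` stands in for an injected redis-cli call.\n    fmt.Print(ping(\"invalid.host; echo INJECTED\"))\n}\n```\n\n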
The application does not validate or sanitize the input, which allows us to inject additional commands that are also executed in the application environment.\n\nThe application is running as containerized microservices in a Kubernetes cluster. The Kubernetes cluster is shared across multiple teams and projects, allowing us to inject and read data in another application running next to ours. In our example, we will connect to an unsecured Redis instance in a different namespace and read/write data.\n\nNow let us take a closer look at how GitLab can help us detect the attack, prevent its execution, and finally help us find and fix the root cause in our code.\n\n## Container Host Security\n\n[Container Host Security](/stages-devops-lifecycle/govern/) helps us to detect an attack in real-time by monitoring the pod for any unusual activity. It can then alert operations with detailed information on the attack itself.\n\nContainer Host Security is powered by [Falco](https://falco.org/), an open-source runtime security tool that listens to the Linux kernel using eBPF. Falco parses system calls and asserts the stream against a configurable rules engine in real-time. The Falco deployment used by Container Host Security can be deployed and fully managed using [GitLab Managed Apps](https://docs.gitlab.com/ee/update/removals.html).\n\nIn our example, Falco detects the injected redis-cli command, which is used to read/write data into the unsecured Redis instance. \n\n![Container Host Security](https://about.gitlab.com/images/blogimages/2020-08-18-How-GitLab-Can-Help-You-Secure-Your-Cloud-Native-Applications/falco.png)\n\nFalco can now alert operations, who can use those valuable insights to define and execute further steps. \n\n## Container Network Security\n\nA first step to prevent access to the unsecured Redis instance would be to restrict traffic between the applications in our Kubernetes cluster. 
This can be done by using [Container Network Security](/stages-devops-lifecycle/govern/). Container Network Security is again fully managed by [GitLab Managed Apps](https://docs.gitlab.com/ee/update/removals.html) and can also be configured within the GitLab project user interface.\n\nContainer Network Security is powered by [Cilium](https://cilium.io/), an open-source networking plugin for Kubernetes that can be used to implement support for NetworkPolicy resources. [Network Policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/) can be used to detect and block unauthorized network traffic between pods and to/from the Internet.\n\nImplementing Network Policies for our application will block the underlying network traffic generated by the attack. The policies can be enabled within the GitLab project UI:\n\n![Network Policies](https://about.gitlab.com/images/blogimages/2020-08-18-How-GitLab-Can-Help-You-Secure-Your-Cloud-Native-Applications/network-polices.png)\n\n## Web Application Firewall\n\nWith Container Network Security in place, our attack isn’t able to talk to the Redis instance anymore, but it is still possible to execute other, network-unrelated attacks using the command injection. [Web Application Firewall (WAF)](/stages-devops-lifecycle/govern/) can now help us increase security by detecting and blocking the attack at the [Kubernetes Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) level. \n\nThe Web Application Firewall is also powered by open source. It is based on the [ModSecurity](https://kubernetes.github.io/ingress-nginx/user-guide/third-party-addons/modsecurity/) module, a toolkit for real-time web application monitoring, logging, and access control. It is preconfigured to use [OWASP’s Core Rule Set](https://www.modsecurity.org/CRS/Documentation/), which provides generic attack detection capabilities. 
Like the other integrations, Web Application Firewall is also fully managed by GitLab using [GitLab Managed Apps](https://docs.gitlab.com/ee/update/removals.html).\n\nIn our example, the Web Application Firewall detects the attack and is also able to block it:\n\n![Web Application Firewall logs](https://about.gitlab.com/images/blogimages/2020-08-18-How-GitLab-Can-Help-You-Secure-Your-Cloud-Native-Applications/waf-log.png)\n\nBlocking the attack at the Ingress level will help us deny the traffic before it hits our application. To do so, we can enable the Web Application Firewall blocking mode directly from the GitLab UI:\n\n![WAF settings](https://about.gitlab.com/images/blogimages/2020-08-18-How-GitLab-Can-Help-You-Secure-Your-Cloud-Native-Applications/waf-settings.png)\n\nIn addition to Container Host Security, we could have used the Web Application Firewall to detect the attack using the Threat Monitoring dashboard within our GitLab project:\n\n![Threat Monitoring](https://about.gitlab.com/images/blogimages/2020-08-18-How-GitLab-Can-Help-You-Secure-Your-Cloud-Native-Applications/thread-monitoring.png)\n\nThe Threat Monitoring dashboard also provides us with useful insights and metrics of our enforced Container Network Policy.\n\n## Static Application Security Testing\n\nWe have now successfully protected our application runtime and ensured that no additional attacks can be executed. But we should also find and fix the root cause to ensure that such incidents do not recur in the future. This is where [Static Application Security Testing (SAST)](/stages-devops-lifecycle/secure/) can help us. 
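\n\nAs a minimal sketch, SAST can be enabled by including GitLab's CI/CD template in `.gitlab-ci.yml`, which detects the project language and runs the matching analyzer:\n\n```yaml\ninclude:\n  - template: Security/SAST.gitlab-ci.yml\n```\n\n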
Static Application Security Testing can be easily integrated into our project using [GitLab CI/CD](https://docs.gitlab.com/ee/ci/) and then allows us to analyze our [source code](/solutions/source-code-management/) for known vulnerabilities.\n\nIn our case (a Golang application), the code scanning is executed using the open-source project [Golang Security Checker](https://github.com/securego/gosec). The results are displayed in the Security dashboard of our GitLab project for easy access:\n\n![Security Dashboard](https://about.gitlab.com/images/blogimages/2020-08-18-How-GitLab-Can-Help-You-Secure-Your-Cloud-Native-Applications/sec-dashboard.png)\n\nIn our example, the code scan has identified the root cause and provides us with detailed information about the vulnerability, the line of code that needs to be fixed, and the ability to easily create an issue to fix it.\n\n![SAST](https://about.gitlab.com/images/blogimages/2020-08-18-How-GitLab-Can-Help-You-Secure-Your-Cloud-Native-Applications/sast.png)\n\nFinally, of course, we should also talk to the team running the other application to make sure that their Redis instance gets secured too. We should also verify how the other [GitLab Secure](https://about.gitlab.com/stages-devops-lifecycle/secure/) features can help to further improve the overall security of the application.\n\n## GitLab Protect and Secure in action\n\nIf you would like to get more insights into GitLab Secure and Protect and want to see them in action, you are welcome to join [Wayne](https://gitlab.com/whaber), [Philippe](https://gitlab.com/plafoucriere) and myself in our session [“Your Attackers Won't Be Happy! 
How GitLab Can Help You Secure Your Cloud-Native Applications!”](https://gitlabcommitvirtual2020.sched.com/event/dUWw/your-attackers-wont-be-happy-how-gitlab-can-help-you-secure-your-cloud-native-applications) at GitLab Commit where you can gain further insights on Container Host Security, Container Network Security, Web Application Firewall (WAF), and Static Application Security Testing (SAST).\n\nRegister today and join me and others at [GitLab Commit](https://about.gitlab.com/events/commit/) on August 26. GitLab Commit 2020 is a free 24-hour virtual experience filled with practical DevOps strategies shared by leaders in development, operations, and security.\n","devsecops",[727,685,9,835,855],{"slug":1750,"featured":6,"template":688},"how-gitlab-can-help-you-secure-your-cloud-native-applications","content:en-us:blog:how-gitlab-can-help-you-secure-your-cloud-native-applications.yml","How Gitlab Can Help You Secure Your Cloud Native Applications","en-us/blog/how-gitlab-can-help-you-secure-your-cloud-native-applications.yml","en-us/blog/how-gitlab-can-help-you-secure-your-cloud-native-applications",{"_path":1756,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":1757,"content":1763,"config":1769,"_id":1771,"_type":13,"title":1772,"_source":15,"_file":1773,"_stem":1774,"_extension":18},"/en-us/blog/how-gitlab-pages-uses-the-gitlab-api",{"title":1758,"description":1759,"ogTitle":1758,"ogDescription":1759,"noIndex":6,"ogImage":1760,"ogUrl":1761,"ogSiteName":675,"ogType":676,"canonicalUrls":1761,"schema":1762},"How GitLab Pages uses the GitLab API to serve content","GitLab Pages is changing the way it reads a project's configuration to speed up booting times and slowly remove its dependency to NFS.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749679634/Blog/Hero%20Images/retrosupply-jLwVAUtLOAQ-unsplash.jpg","https://about.gitlab.com/blog/how-gitlab-pages-uses-the-gitlab-api-to-serve-content","\n                        {\n        \"@context\": 
\"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"How GitLab Pages uses the GitLab API to serve content\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Jaime Martínez\"}],\n        \"datePublished\": \"2020-08-03\",\n      }",{"title":1758,"description":1759,"authors":1764,"heroImage":1760,"date":1766,"body":1767,"category":683,"tags":1768},[1765],"Jaime Martínez","2020-08-03","This blog post was originally published on the GitLab Unfiltered\nblog. It was reviewed and republished on\n2020-11-13.\n\n{: .alert .alert-info .note}\n\n\n[GitLab Pages](https://docs.gitlab.com/ee/user/project/pages/) allows you to create and\nhost GitLab project websites from a user account or group for free on\n[GitLab.com](https://www.gitlab.com/) or on your self-managed GitLab\ninstance.\n\n\nIn this post, I will explain how the [GitLab Pages\ndaemon](https://gitlab.com/gitlab-org/gitlab-pages) obtains a domain's\nconfiguration using the\nGitLab API, specifically on [GitLab.com](https://www.gitlab.com/).\n\n\n## How does GitLab Pages know where to find your website files?\n\n\nIn the future, GitLab Pages will use object storage to store the contents of your\nwebsite. You can follow the development of this new feature\n[here](https://gitlab.com/groups/gitlab-org/-/epics/3901).\n\n\nCurrently, GitLab Pages uses an NFS shared mount drive to store the contents\nof your website.\n\nYou can define the value of this path by defining the\n[`pages_path`](https://docs.gitlab.com/ee/administration/pages/#change-storage-path)\nin your `/etc/gitlab/gitlab.rb` file.\n\n\nWhen you deploy a website using the `pages:` keyword in your\n`.gitlab-ci.yml` file, a `public` path artifact must be defined, containing\nthe files available for your website. 
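\n\nA minimal `pages:` job illustrating this could look like the following sketch (the `script` contents are just an example):\n\n```yaml\npages:\n  stage: deploy\n  script:\n    - mkdir -p public\n    - cp index.html public/\n  artifacts:\n    paths:\n      - public\n```\n\n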
This `public` artifact eventually\nmakes its way into the NFS shared mount.\n\n\nWhen you deploy a website to GitLab Pages, a domain will be created based on\nthe [custom Pages domain you have\nconfigured](https://docs.gitlab.com/ee/administration/pages/#configuration).\nFor [GitLab.com](https://www.gitlab.com/), the Pages domain is\n`*.gitlab.io`. If you create a project named `myproject.gitlab.io` and\nenable HTTPS, a wildcard SSL certificate will be used.\n\nYou can also [set up a custom\ndomain](https://docs.gitlab.com/ee/user/project/pages/custom_domains_ssl_tls_certification/)\nfor your project, for example `myawesomedomain.com`.\n\n\nFor every project (aka domain) that is served by the Pages daemon, there\nmust be a directory in the NFS shared mount that matches your domain name\nand holds its contents. For example, if we had a project named\n`myproject.gitlab.io`, the Pages daemon would look for your `.html` files\nunder the `/path/to/shared/pages/myproject/myproject.gitlab.io/public`\ndirectory.\n\nThis is how GitLab Pages serves the content published by the `pages:`\nkeyword in your CI configuration.\n\n\nBefore [GitLab 12.10](/releases/2020/04/22/gitlab-12-10-released/) was\nreleased, the Pages daemon would rely on a file named `config.json` located\nin your project's directory in the NFS shared mount, that is\n`/path/to/shared/pages/myproject/myproject.gitlab.io/config.json`.\n\nThis file contains metadata related to your project and [custom domain\nnames](https://docs.gitlab.com/ee/user/project/pages) you may have set up.\n\n\n```json\n\n{\n  \"domains\":[\n    {\n      \"Domain\":\"myproject.gitlab.io\"\n    },\n    {\n      \"Domain\": \"mycustomdomain.com\",\n      \"Certificate\": \"--certificate contents--\",\n      \"Key\": \"--key contents--\"\n    }\n  ],\n  \"id\":123,\n  \"access_control\":true,\n  \"https_only\":true\n}\n\n```\n\nGitLab Pages has been a very popular addition to GitLab, and the number of\nhosted websites on GitLab.com has increased 
over time. We are currently\nhosting over 251,000 websites!\n\nOn start-up, the Pages daemon would [traverse all\ndirectories](https://gitlab.com/gitlab-org/gitlab-pages/-/blob/v1.21.0/app.go#L448)\nin the NFS shared mount and load the configuration of all the deployed Pages\nprojects into memory.\n\nBefore 09-19-2019, the Pages daemon would take [approximately 25 minutes to\nbe ready to serve\nrequests](https://gitlab.com/gitlab-org/gitlab-pages/-/issues/252) per\ninstance on GitLab.com.\n\nAfter upgrading GitLab Pages to version\n[`v1.9.0`](https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/185),\nimprovements in some dependencies reduced booting time\nto approximately five minutes. This was great but not ideal.\n\n\n## GitLab API-based configuration\n\n\nAPI-based configuration was\n[introduced](https://gitlab.com/gitlab-org/gitlab-pages/-/issues/282) in\nGitLab 12.10.\n\nWith API-based configuration, the daemon will start serving content in just\na few seconds after booting.\n\nFor example, a particular Pages node on GitLab.com is usually ready\nto serve content within one minute after starting.\n\n\nOn [GitLab.com](https://www.gitlab.com/), the Pages daemon now sources the\ndomain configuration via an internal API endpoint,\n`/api/v4/internal/pages?domain=myproject.gitlab.io`.\n\nThis is done on demand per domain and the configuration is cached in memory\nfor a certain period of time to speed up serving content from that Pages\nnode.\n\n\nThe response from the API is very similar to the contents of the\n`config.json` file:\n\n\n```json\n\n{\n    \"certificate\": \"--cert-contents--\",\n    \"key\": \"--key-contents--\",\n    \"lookup_paths\": [\n        {\n            \"access_control\": true,\n            \"https_only\": true,\n            \"prefix\": \"/\",\n            \"project_id\": 123,\n            \"source\": {\n                \"path\": \"myproject/myproject.gitlab.io/public/\",\n                \"type\": 
\"file\"\n            }\n        ]\n}\n\n```\n\n\nYou can see that the source type is `file`. This means that the Pages daemon\nwill still serve the contents from the NFS shared mount. We are actively\nworking on removing the NFS dependency from GitLab Pages by [updating the\nGitLab Pages\narchitecture](https://gitlab.com/groups/gitlab-org/-/epics/1316).\n\n\nWe are planning to [transition GitLab Pages to object storage instead of\nNFS](https://gitlab.com/groups/gitlab-org/-/epics/3901). This will\nessentially [enable GitLab Pages to run on\nKubernetes](https://gitlab.com/gitlab-org/gitlab/-/issues/39586) in the\nfuture.\n\n\n**Update**:\n\nWe have now [rolled out zip source type on\nGitLab.com](https://gitlab.com/gitlab-com/gl-infra/production/-/issues/2808).\nThis behavior is behind a feature flag and is not the final\nimplementation.\n\nAs of 10-22-2020 we serve about 75% of Pages projects from zip and object\nstorage and we're getting closer to removing the NFS dependency!\n\n\n## Self-managed GitLab instances\n\n\nThe changes to the GitLab Pages architecture were piloted on GitLab.com,\nwhich is possibly the largest GitLab Pages implementation.\n\nOnce all the changes supporting the move to an API-based configuration are\ncompleted, they will be rolled out to self-managed customers.\n\nYou can find more details and the issues we faced while rolling out\nAPI-based configuration in this\n[issue](https://gitlab.com/gitlab-org/gitlab-pages/-/issues/282).\n\n\nIf you can't wait to speed up your Pages nodes' startup, we have a potential\nguide in this [issue\ndescription](https://gitlab.com/gitlab-org/gitlab/-/issues/28298#potential-workaround)\nwhich explains how we enabled the API on GitLab.com. 
However, this method\nwill be removed in the near future.\n\n\n**Update**:\n\nYou can now enable API-based configuration by following [this\nguide](https://docs.gitlab.com/ee/administration/pages/#gitlab-api-based-configuration).\n\n\n## Domain source configuration and API status\n\n\nIn the meantime, we are working toward adding [a new configuration flag for\nGitLab Pages](https://gitlab.com/gitlab-org/gitlab/-/issues/217912) which\nwill allow you to choose the domain configuration source by specifying\n`domain_config_source` in your `/etc/gitlab/gitlab.rb` file.\n\nBy default, GitLab Pages will use the `disk` source configuration the same\nway it is used today.\n\n\nIn the background, the Pages daemon will start [checking the API\nstatus](https://gitlab.com/gitlab-org/gitlab-pages/-/merge_requests/304) by\ncalling the `/api/v4/internal/pages/status` endpoint. This will help you\ncheck if the Pages daemon is ready to talk to the GitLab API, especially\nwhen you are [running Pages on a separate\nserver](https://docs.gitlab.com/ee/administration/pages/#running-gitlab-pages-on-a-separate-server).\n\n\nPlease check the [GitLab Pages administration\nguide](https://docs.gitlab.com/ee/administration/pages/#troubleshooting) for\nfurther troubleshooting.\n\n\nCover image by [@RetroSupply](https://unsplash.com/@retrosupply) on\n[Unsplash](https://unsplash.com/photos/jLwVAUtLOAQ)\n\n{: .note}\n",[1288,9],{"slug":1770,"featured":6,"template":688},"how-gitlab-pages-uses-the-gitlab-api","content:en-us:blog:how-gitlab-pages-uses-the-gitlab-api.yml","How Gitlab Pages Uses The Gitlab 
Api","en-us/blog/how-gitlab-pages-uses-the-gitlab-api.yml","en-us/blog/how-gitlab-pages-uses-the-gitlab-api",{"_path":1776,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":1777,"content":1782,"config":1788,"_id":1790,"_type":13,"title":1791,"_source":15,"_file":1792,"_stem":1793,"_extension":18},"/en-us/blog/how-to-stream-logs-through-the-gitlab-dashboard-for-kubernetes",{"title":1778,"description":1779,"ogTitle":1778,"ogDescription":1779,"noIndex":6,"ogImage":760,"ogUrl":1780,"ogSiteName":675,"ogType":676,"canonicalUrls":1780,"schema":1781},"How to stream logs through the GitLab Dashboard for Kubernetes","In GitLab 17.2, users can now view Kubernetes pod and container logs directly via the GitLab UI. This tutorial shows how to use this new feature to simplify monitoring Kubernetes infrastructure.","https://about.gitlab.com/blog/how-to-stream-logs-through-the-gitlab-dashboard-for-kubernetes","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"How to stream logs through the GitLab Dashboard for Kubernetes\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Daniel Helfand\"}],\n        \"datePublished\": \"2024-08-19\",\n      }",{"title":1778,"description":1779,"authors":1783,"heroImage":760,"date":1785,"body":1786,"category":683,"tags":1787},[1784],"Daniel Helfand","2024-08-19","Developers are context-switching more frequently, needing to understand and\nuse multiple tools to accomplish complex tasks. These tools all have\ndifferent user experiences and often do not present all the information\nneeded to successfully develop, troubleshoot, and ship critical features. 
It\nis challenging enough to release and monitor software changes without also\nneeding to understand so many tools.\n\n\nWith the addition of [pod log streaming through the GitLab Dashboard for\nKubernetes in\nv17.2](https://about.gitlab.com/releases/2024/07/18/gitlab-17-2-released/#log-streaming-for-kubernetes-pods-and-containers),\ndevelopers can go straight from a merge request review to watching a\ndeployment being rolled out to Kubernetes. This new feature will:\n\n- allow developers to avoid switching tooling\n\n- ease the process of troubleshooting and monitoring deployments and\npost-deployment application health\n\n- strengthen [GitOps\nworkflows](https://docs.gitlab.com/ee/user/clusters/agent/gitops.html) to\neasily manage application and infrastructure changes\n\n\nThe new feature allows GitLab users to view the logs of pods and containers\ndirectly via the GitLab UI. In previous versions of GitLab, users could\nconfigure a GitLab project to view pods deployed to certain namespaces on an\nassociated cluster. This new feature allows users to further monitor\nworkloads running on Kubernetes without needing to switch to another tool.\n\n\nIn the sections below, you will learn how to use this new feature by adding\na Kubernetes cluster to a GitLab project, deploying a sample workload to a\ncluster, and viewing the logs of this workload running on a cluster. \n\n\n> Need to know the basics of Kubernetes? 
[Read this quick introductory\nblog](https://about.gitlab.com/blog/kubernetes-the-container-orchestration-solution/).\n\n\n## Configure a GitLab project to view Kubernetes resources\n\n\nBefore proceeding with this section, the following prerequisites are\nrequired:\n\n* a remote Kubernetes cluster (i.e., not running locally on your machine)\n\n* access to a GitLab v17.2 account\n\n* [this\nrepository](https://gitlab.com/gitlab-da/tutorials/cloud-native/gitlab-k8s-log-streaming-example)\nforked to a GitLab group to which you have access\n\n* Helm CLI\n\n* kubectl CLI\n\n\nOnce you have satisfied the prerequisites involved, add an agent\nconfiguration file to the GitLab project you forked. The configuration file\nallows users to control permissions around how GitLab users may interact\nwith the associated Kubernetes cluster.\n\n\nYou can use the configuration file included in this GitLab project by\nchanging the following file: `.gitlab/agents/k8s-agent/config.yaml`. Replace\nthe `\u003CGitLab group>` in the id property shown below with the group where you\nhave forked the example project. This config file will allow [GitLab to\naccess your cluster via an\nagent](https://docs.gitlab.com/ee/user/clusters/agent/user_access.html) that\ncan be installed on your cluster.\n\n\n```yaml\n\nuser_access:\n  access_as:\n    agent: {}\n  projects:\n    - id: \u003CGitLab group>/gitlab-k8s-log-streaming-example\n```\n\n\nOnce the above file is edited, you can commit and push these changes to the\nmain branch of the project. \n\n\n## Add GitLab Kubernetes agent to cluster\n\n\nWith the agent configuration file added, now add the cluster to GitLab by\ninstalling an agent on your cluster. In the GitLab UI, go to your project\nand, on the left side of the screen, select **Operate > Kubernetes\nclusters**. Once on this page, select the **Connect a cluster** button on\nthe right side of the screen. From the dropdown menu, you can then select\nthe agent, which should be `k8s-agent`. 
Click **Register** to get\ninstructions for how to install the agent on your cluster.\n\n\nAfter registering the agent, you will be presented with instructions to run\na helm command that installs the GitLab agent on your cluster. Before\nrunning the command locally, you will want to ensure your Kubernetes context\nis targeting the cluster you want to work with. Once you have verified you\nare using the correct kubeconfig locally, you can run the helm command to\ninstall the agent on your cluster.\n\n\nRun the following command to wait for the pods to start up. Once both pods\nare running, GitLab should be able to connect to the agent:\n\n\n```shell\n\nkubectl get pods -n gitlab-agent-k8s-agent -w\n\n```\n\n\n## Deploy sample application to your cluster\n\n\nBefore you can view logs of a workload through GitLab, you first need to\nhave something running on your cluster. To do this, you can run the\nfollowing kubectl command locally. \n\n\n```shell\n\nkubectl apply -f\nhttps://gitlab.com/gitlab-da/tutorials/cloud-native/gitlab-k8s-log-streaming-example/-/raw/main/k8s-manifests/k8s.yaml\n\n```\n\n\nAfter the command runs successfully, you are now ready to complete the final\nstep to set up a Kubernetes dashboard via GitLab.\n\n\n## View pod logs through the GitLab UI\n\n\nTo add the Kubernetes dashboard via the GitLab UI, go to your project and,\non the left side of the screen, select **Operate > Environments**. On the\ntop right side of the screen, select the **Create an environment** button.\n\n\nNext, you can give your environment a name, select the GitLab agent (i.e.\n`k8s-agent`), and pick a namespace for the Kubernetes dashboard to focus on.\nSince the application is running in the\n`gitlab-k8s-log-streaming-example-dev` namespace, select this option from\nthe namespace dropdown. 
After naming the environment and selecting the agent\nand namespace, click **Save**.\n\n\nAfter creating the environment, you should now see information about the\napplication’s pods displayed via the GitLab UI.\n\n\n![Kubernetes logs - image\n2](https://res.cloudinary.com/about-gitlab-com/image/upload/v1749676402/Blog/Content%20Images/Screenshot_2024-08-20_at_12.15.08_PM.png)\n\n\nGo to the right side of the screen and click **View Logs** to see logs for\none of the pods associated with the application. \n\n\n![Kubernetes dashboard - image\n1](https://res.cloudinary.com/about-gitlab-com/image/upload/v1749676402/Blog/Content%20Images/Screenshot_2024-08-20_at_12.16.56_PM.png)\n\n\n## Try it out and share feedback\n\n\nThe introduction of pod log streaming in GitLab v17.2 will help GitLab users\nget one step closer to managing complex deployments to Kubernetes, as well\nas monitoring and troubleshooting issues post deployment via a common user\nexperience. We are excited to hear more about users’ experiences with this\nnew enhancement and how it helps improve DevOps workflows around Kubernetes.\nTo share your experience with us, you can open an issue to the [project\nassociated with this\ntutorial](https://gitlab.com/gitlab-da/tutorials/cloud-native/gitlab-k8s-log-streaming-example).\nOr, [comment directly in the Kubernetes log streaming feedback\nissue](https://gitlab.com/gitlab-org/gitlab/-/issues/478379) to report\ninformation to the GitLab engineering team.\n\n\nMore information on getting started with the GitLab Dashboard for Kubernetes\ncan be found in the documentation\n[here](https://docs.gitlab.com/ee/ci/environments/kubernetes_dashboard.html).\n\n\n> To explore the GitLab Dashboard for Kubernetes as well as other more\nadvanced features of GitLab, sign up for [our free trial of 
GitLab\nUltimate](https://about.gitlab.com/free-trial/).\n",[984,539,9,748],{"slug":1789,"featured":90,"template":688},"how-to-stream-logs-through-the-gitlab-dashboard-for-kubernetes","content:en-us:blog:how-to-stream-logs-through-the-gitlab-dashboard-for-kubernetes.yml","How To Stream Logs Through The Gitlab Dashboard For Kubernetes","en-us/blog/how-to-stream-logs-through-the-gitlab-dashboard-for-kubernetes.yml","en-us/blog/how-to-stream-logs-through-the-gitlab-dashboard-for-kubernetes",{"_path":1795,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":1796,"content":1802,"config":1807,"_id":1809,"_type":13,"title":1810,"_source":15,"_file":1811,"_stem":1812,"_extension":18},"/en-us/blog/how-to-use-oci-images-as-the-source-of-truth-for-continuous-delivery",{"title":1797,"description":1798,"ogTitle":1797,"ogDescription":1798,"noIndex":6,"ogImage":1799,"ogUrl":1800,"ogSiteName":675,"ogType":676,"canonicalUrls":1800,"schema":1801},"How to use OCI images as the source of truth for continuous delivery","Discover the benefits of using Open Container Initiative images as part of GitOps workflows and the many features GitLab offers to simplify deployments to Kubernetes.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1750097601/Blog/Hero%20Images/Blog/Hero%20Images/REFERENCE%20-%20Use%20this%20page%20as%20a%20reference%20for%20thumbnail%20sizes_76Tn5jFmEHY5LFj8RdDjNY_1750097600692.png","https://about.gitlab.com/blog/how-to-use-oci-images-as-the-source-of-truth-for-continuous-delivery","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"How to use OCI images as the source of truth for continuous delivery\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Daniel Helfand\"}],\n        \"datePublished\": \"2025-02-19\",\n      }",{"title":1797,"description":1798,"authors":1803,"heroImage":1799,"date":1804,"body":1805,"category":876,"tags":1806},[1784],"2025-02-19","Is 
[GitOps](https://about.gitlab.com/topics/gitops/) still GitOps if you are\nnot using a git repository as your deployment artifact? While git remains\ncentral to GitOps workflows, storing infrastructure definitions as Open\nContainer Initiative (OCI) artifacts in container registries has seen a rise\nin adoption as the source for GitOps deployments. In this article, we will\ndive deeper into the ideas behind this trend and how GitLab features support\nthis enhancement to GitOps workflows.\n\n\n## What is GitOps?\n\n\nThe [OpenGitOps](https://opengitops.dev/) project has defined [four\nprinciples](https://opengitops.dev/#principles) for the practice of GitOps:\n\n- A [system managed by\nGitOps](https://github.com/open-gitops/documents/blob/v1.0.0/GLOSSARY.md#software-system)\nmust have its [desired state expressed\ndeclaratively](https://github.com/open-gitops/documents/blob/v1.0.0/GLOSSARY.md#declarative-description).\n\n- Desired state is stored in a way that enforces immutability and\nversioning, and retains a complete version history.\n\n- Software agents automatically pull the desired state declarations from the\nsource.\n\n- Software agents\n[continuously](https://github.com/open-gitops/documents/blob/v1.0.0/GLOSSARY.md#continuous)\nobserve actual system state and [attempt to apply the desired\nstate](https://github.com/open-gitops/documents/blob/v1.0.0/GLOSSARY.md#reconciliation).\n\n\nAn example of GitOps is storing the Kubernetes manifests for a microservice\nin a GitLab project. Those Kubernetes resources are then continuously\nreconciled by a\n[controller](https://kubernetes.io/docs/concepts/architecture/controller/)\nrunning on the Kubernetes cluster where the microservice is deployed to.\nThis allows engineers to manage infrastructure using the same workflows as\nworking with regular code, such as opening merge requests to make and review\nchanges and versioning changes. 
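As a rough sketch of this reconciliation pattern (the project URL, resource names, and path below are illustrative assumptions, not taken from a real project), a Flux CD controller can be pointed at a GitLab repository like so:

```yaml
# Hypothetical example: a Flux CD source and reconciliation pair
# watching a GitLab project and applying its Kubernetes manifests.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: my-microservice
  namespace: flux-system
spec:
  interval: 1m
  url: https://gitlab.com/example-group/my-microservice.git
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: my-microservice
  namespace: flux-system
spec:
  interval: 1m
  path: "./manifests"
  sourceRef:
    kind: GitRepository
    name: my-microservice
  prune: true
```

With `prune: true`, resources removed from the repository are also removed from the cluster, which is what keeps the cluster from drifting away from the declared state.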
GitOps also has operational benefits such as\n[preventing configuration\ndrift](https://about.gitlab.com/topics/gitops/#cicd) and helps engineers\naudit what changes led to certain outcomes with deployments.\n\n\n## Benefits and limitations of git in GitOps workflows\n\n\nWhile git is an essential piece of GitOps workflows, git repositories were\nnot designed to be deployed by GitOps controllers. Git does provide the\nability for engineers to collaborate on infrastructure changes and audit\nthese changes later on, but controllers do not need to download an entire\ngit repository for a successful deployment. GitOps controllers simply need\nthe infrastructure defined for a particular environment.\n\n\nAdditionally, an important piece of the deployment process is to [sign and\nverify\ndeployments](https://docs.sigstore.dev/about/overview/#why-cryptographic-signing)\nto ensure deployment changes to an environment are coming from a trusted\nsource. While git commits can be signed and verified by GitOps controllers,\ncommits may also capture other details not related to the deployment itself\n(e.g., documentation changes, updates to other environments, and git\nrepository restructuring) or not enough of the deployment picture, as a\ndeployment may consist of multiple commits. Again, this is a use case that\ncommit signing wasn’t designed for.\n\n\nAnother challenging aspect of git in GitOps workflows is that it can\nsometimes lead to more automation than expected. Soon after merging a change\nto the watched branch, it will be deployed. There are no controls in the\nprocess outside of git. How can you make sure that nothing gets deployed\nlate on a Friday afternoon? What if teams responsible for deployment do not\nhave permissions to merge changes in certain GitLab projects? 
Using OCI\nimages adds a pipeline into the process, including all the delivery control\nfeatures, like [approvals or deploy\nfreezes](https://docs.gitlab.com/ee/ci/environments/protected_environments.html).\n\n\n## OCI images\n\n\nThe [Open Container Initiative](https://opencontainers.org/) has helped to\ndefine standards around container formats. While most engineers are familiar\nwith building Dockerfiles into container images, many may not be as familiar\nwith storing Kubernetes manifests in a container registry. Because [GitLab’s\nContainer\nRegistry](https://docs.gitlab.com/ee/user/packages/container_registry/) is\nOCI compliant, it allows for users to push Kubernetes manifests for a\nparticular environment to a container registry. GitOps controllers, such as\n[Flux\nCD](https://about.gitlab.com/blog/why-did-we-choose-to-integrate-fluxcd-with-gitlab/),\ncan use the manifests stored in this OCI artifact instead of needing to\nclone an entire git repository.\n\n\nOften in GitOps workflows, a git repository can include the infrastructure\ndefinitions for all environments that a microservice will be deployed to. By\npackaging the Kubernetes manifests for only a specific environment, Flux CD\ncan download the minimum files needed to carry out a deployment to a\nspecific environment.\n\n\n### Security benefits of using OCI artifacts\n\n\nAs mentioned previously, signing and verifying the artifacts to be deployed\nto an environment adds an additional layer of security for software\nprojects. After Kubernetes manifests are pushed to a container registry, a\ntool like [Sigstore\nCosign](https://docs.sigstore.dev/quickstart/quickstart-cosign/) can be used\nto sign the OCI image with a private key that can be securely stored in a\nGitLab project as a [CI/CD\nvariable](https://docs.gitlab.com/ee/ci/variables/). 
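The GitLab CI/CD component shown later in this article handles signing for you, but the step amounts to something like the following job (a hypothetical sketch: the job name, stage, and image tag are assumptions, and the job assumes an image with the `cosign` CLI installed):

```yaml
# Hypothetical job: sign the pushed OCI artifact with the private key
# stored as a CI/CD variable by `cosign generate-key-pair gitlab://...`.
sign-oci-manifests:
  stage: deploy
  script:
    # COSIGN_PASSWORD must also be set if the private key is password-protected.
    - cosign sign --key env://COSIGN_PRIVATE_KEY "$CI_REGISTRY_IMAGE/frontend:dev"
```

Cosign resolves the tag to a digest and pushes the signature alongside the image in the same registry.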
Flux CD can then use a\npublic key stored on a Kubernetes cluster to verify that a deployment is\ncoming from a trusted source.\n\n\n## Using GitLab to push and sign OCI images\n\n\nGitLab offers many features that help simplify the process of packaging,\nsigning, and deploying OCI images. A common way to structure GitLab projects\nwith GitOps workflows is to have separate GitLab projects for microservices’\ncode and a single infrastructure repository for all microservices. If an\napplication is composed of `n` microservices, this would require having\n`n+1` GitLab projects for an application.\n\n\nThe artifact produced by a code project is usually a container image that\nwill be used to package the application. The infrastructure or delivery\nproject will contain the Kubernetes manifests defining all the resources\nrequired to scale and serve traffic to each microservice. The artifact\nproduced by this project is usually an OCI image used to deploy the\napplication and other manifests to Kubernetes.\n\n\nIn this setup, separation of environments is handled by defining Kubernetes\nmanifests in separate folders. These folders represent environments (e.g.,\ndevelopment, staging, and production) that will host the application. When\nchanges are made to the code project and a new container image is pushed,\nall that needs to be done to deploy these changes via GitLab’s integration\nwith Flux CD is to edit the manifests under the environment folder to\ninclude the new image reference and open a merge request. Once that merge\nrequest is reviewed, approved, and merged, the delivery project’s CI/CD job\nwill push a new OCI image that Flux CD will pick up and deploy to the new\nenvironment.\n\n\n![OCI images - flow\nchart](https://res.cloudinary.com/about-gitlab-com/image/upload/v1750097611/Blog/Content%20Images/Blog/Content%20Images/image1_aHR0cHM6_1750097611046.png)\n\n\nSigning an OCI image is as simple as including Cosign in your project’s\nCI/CD job. 
You can simply generate a new public and private key with Cosign\nby running the commands below locally. Just make sure to log in to your\nGitLab instance with the [glab\nCLI](https://gitlab.com/gitlab-org/cli/#installation) and replace the\n[`PROJECT_ID`] for the Cosign command with your [delivery project’s\nID](https://docs.gitlab.com/ee/user/project/working_with_projects.html#access-a-project-by-using-the-project-id).\n\n\n```\n\nglab auth login\n\ncosign generate-key-pair gitlab://[PROJECT_ID]\n\n```\n\n\nOnce the cosign command runs successfully, you can see the Cosign keys added\nto your project under the CI/CD variables section under the key names\n`COSIGN_PUBLIC_KEY` and `COSIGN_PRIVATE_KEY`.\n\n\n### Example CI/CD job\n\n\nA GitLab CI/CD job for pushing an OCI image will look something like the\nfollowing:\n\n\n```yaml\n\nfrontend-deploy:\n  rules:\n  - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH\n    changes:\n      paths:\n      - manifests/dev/frontend-dev.yaml\n  trigger:\n    include:\n      - component: gitlab.com/components/fluxcd/oci-artifact@0.3.1\n        inputs:\n          version: 0.3.1\n          kubernetes_agent_reference: gitlab-da/projects/tanuki-bank/flux-config:dev\n          registry_image_url: \"oci://$CI_REGISTRY_IMAGE/frontend\"\n          image_tag: dev\n          manifest_path: ./manifests/dev/frontend-dev.yaml\n          flux_oci_repo_name: frontend\n          flux_oci_namespace_name: frontend-dev\n          signing_private_key: \"$COSIGN_PRIVATE_KEY\"\n```\n\n\nThe [GitLab CI/CD\nCatalog](https://about.gitlab.com/blog/ci-cd-catalog-goes-ga-no-more-building-pipelines-from-scratch/)\noffers a GitLab-maintained [CI/CD component for working with OCI artifacts\nand Flux CD](https://gitlab.com/explore/catalog/components/fluxcd). 
This\ncomponent allows development teams to push Kubernetes manifests as OCI\nimages to GitLab’s Container Registry or an external container registry,\nsign the OCI image using Cosign, and immediately reconcile the newly pushed\nimage via Flux CD.\n\n\nIn the example above, the Flux CD `component` is included in a\n`.gitlab-ci.yml` file of a GitLab project. Using the component’s `inputs`,\nusers can define what registry to push the image to (i.e.,\n`registry_image_url` and `image_tag`), the file path to Kubernetes manifests\nthat will be pushed (i.e., `manifest_path`), the Cosign private key used to\nsign images (i.e., `signing_private_key`), and the Kubernetes namespace and\nFlux CD\n[OCIRepository](https://fluxcd.io/flux/components/source/ocirepositories/)\nname needed to sync updates to an environment (i.e.,\n`flux_oci_namespace_name` and `flux_oci_repo_name`).\n\n\nThe `kubernetes_agent_reference` allows GitLab CI/CD jobs to inherit the\n`kubeconfig` needed to access a Kubernetes cluster without needing to store\na `kubeconfig` CI/CD variable in each GitLab project. By setting up the\n[GitLab agent for\nKubernetes](https://docs.gitlab.com/ee/user/clusters/agent/), you can\nconfigure all GitLab projects’ CI/CD jobs in a [GitLab\ngroup](https://docs.gitlab.com/ee/user/group/) to inherit permissions to\ndeploy to the Kubernetes cluster.\n\n\nThe agent for Kubernetes context points to the project where the GitLab\nagent for Kubernetes is configured in your GitLab group. It is recommended\nthat this be the project where Flux CD is managed. More information on\nconfiguring the agent for CI/CD access can be found in our [CI/CD workflow\ndocumentation](https://docs.gitlab.com/ee/user/clusters/agent/ci_cd_workflow.html).\n\n\nThe variables `$COSIGN_PRIVATE_KEY`, `$FLUX_OCI_REPO_NAME`, and\n`$FRONTEND_DEV_NAMESPACE` are values stored as CI/CD variables to easily\naccess and mask these sensitive pieces of data in CI/CD logs. 
The\n`$CI_REGISTRY_IMAGE` is a variable that GitLab jobs have available by\ndefault that specifies the GitLab project’s container registry.\n\n\n### Deploy OCI images\n\n\nUsing [Flux CD with your GitLab\nprojects](https://docs.gitlab.com/ee/user/clusters/agent/gitops/flux_tutorial.html),\nyou can automate deployments and signing verification for your\nmicroservice’s environments. Once Flux CD is configured to sync from a\nGitLab project, you could add the following Kubernetes [custom resource\ndefinitions](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/)\nto your project to sync your pushed OCI image.\n\n\n```yaml\n\napiVersion: v1\n\nkind: Namespace\n\nmetadata:\n  name: frontend-dev\n  labels:\n    name: frontend-dev\n---\n\napiVersion: bitnami.com/v1alpha1\n\nkind: SealedSecret\n\nmetadata:\n  name: cosign-public-key\n  namespace: frontend-dev\nspec:\n  encryptedData:\n    cosign.pub: AgAKgLf4VbVzJOmr6++k81LlFayx88AELaUQFNOaXmBF4G+fBfBYeABl0skNvMAa1UrPVNSfMIHgFoYHoO96g576a+epk6V6glOI+++XvYbfsygof3GGxe0nL5Qh2b3ge0fNpyd0kTPSjTj0YUhRhKtMGMRSRw1jrwhNcGxCHK+Byibs52v8Np49KsIkeZKbzLdgYABkrv+k0j7hQM+jR180NpG+2UiRvaXpPuogxkbj61FEqWGrJHk8IVyfl3eh+YhoXxOHGDqko6SUC+bUZPDBlU6yKegO0/8Zq3hwulrSEsEjzRZNK+RFVMOLWWuC6h+WGpYhAMcsZPwjjJ/y29KLNa/YeqkN/cdk488QyEFc6ehCxzhH67HxIn2PDa+KkEOTv2TuycGF+Q00jKIizXF+IwLx/oRb3pTCF0AoAY8D8N3Ey+KfkOjsBON7gGID8GbQiJqX2IgIZxFMk0JRzxbRKOEqn+guLd5Shj7CD1a1Mkk0DxBdbqrGv2XNYUaFPI7xd3rZXUJZlnv+fsmwswsiGWRuXwim45HScWzQnfgLAe7tv3spVEGeaO5apl6d89uN21PBQnfE/zyugB//7ZW9tSp6+CSMyc5HynxI8diafqiwKPgvzLmVWRnkvxJijoXicRr3sCo5RudZPSlnjfd7CKdhwEVvLl7dRR4e/XBMdxCzk1p52Pl+3/kJR+LJii5+iwOpYrpVltSZdzc/3qRd19yMpc9PWpXYi7HxTb24EOQ25i21eDJY1ceplDN6bRtop2quzkjlwVeE2i4cEsX/YG8QBtQbop/3fjiAjKaED3QH3Ul0PECS9ARTScSkcOL3I00Xpp8DyD+xH0/i9wCBRDmH3yKX18C8VrMq02ALSnlP7WCVVjCPzubqKx2LPZRxK9EG0fylwv/vWQzTUUwfbPQZsd4c75bSTsTvxqp/UcFaXA==\n  template:\n    metadata:\n      name: cosign-public-key\n      namespace: frontend-dev\n---\n\napiVersion: 
source.toolkit.fluxcd.io/v1beta2\n\nkind: OCIRepository\n\nmetadata:\n    name: frontend\n    namespace: frontend-dev\nspec:\n    interval: 1m\n    url: oci://registry.gitlab.com/gitlab-da/projects/tanuki-bank/tanuki-bank-delivery/frontend\n    ref:\n        tag: dev\n    verify:\n      provider: cosign\n      secretRef:\n        name: cosign-public-key\n---\n\napiVersion: kustomize.toolkit.fluxcd.io/v1\n\nkind: Kustomization\n\nmetadata:\n    name: frontend\n    namespace: frontend-dev\nspec:\n    interval: 1m\n    targetNamespace: frontend-dev\n    path: \".\"\n    sourceRef:\n        kind: OCIRepository\n        name: frontend\n    prune: true\n```\n\n\nThe\n[`Kustomization`](https://fluxcd.io/flux/components/kustomize/kustomizations/)\nresource allows for further customization of Kubernetes manifests and also\nspecifies which namespace to deploy resources to. The `OCIRepository`\nresource for Flux CD allows users to specify the OCI image repository\nreference and tag to regularly sync from. Additionally, you will notice the\n`verify.provider` and `verify.secretRef` properties. These fields allow you\nto verify that the OCI image deployed to the cluster was signed by the\ncorresponding Cosign private key used in the earlier CI/CD job.\n\n\nThe public key needs to be stored in a [Kubernetes\nsecret](https://kubernetes.io/docs/concepts/configuration/secret/) that will\nneed to be present in the same namespace as the `OCIRepository` resource. To\nhave this secret managed by Flux CD and not store the secret in plain text,\nyou can consider using\n[SealedSecrets](https://fluxcd.io/flux/guides/sealed-secrets/) to encrypt\nthe value and have it be decrypted cluster side by a controller.\n\n\nFor a simpler approach not requiring SealedSecrets, you can [deploy the\nsecret via a GitLab\nCI/CD](https://docs.gitlab.com/ee/user/clusters/agent/getting_started_deployments.html)\njob using the [`kubectl\nCLI`](https://kubernetes.io/docs/reference/kubectl/). 
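A minimal sketch of such a job (the job name, stage, image, and variable usage are illustrative assumptions; the agent context matches the `kubernetes_agent_reference` used earlier):

```yaml
# Hypothetical job: create or update the Cosign public key secret
# over the GitLab agent for Kubernetes connection.
deploy-cosign-public-key:
  stage: deploy
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]
  script:
    - kubectl config use-context gitlab-da/projects/tanuki-bank/flux-config:dev
    # --dry-run=client piped into apply makes the job safe to re-run.
    - kubectl create secret generic cosign-public-key
        --namespace frontend-dev
        --from-literal=cosign.pub="$COSIGN_PUBLIC_KEY"
        --dry-run=client -o yaml | kubectl apply -f -
```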
In the non-sealed\nsecret approach, you would simply remove the SealedSecret included above and\nrun the job to deploy the public key secret before running the job to push\nthe new OCI image. This keeps the secret stored securely in GitLab while\nmaking it accessible on the cluster to the `OCIRepository` resource. While\nthis approach is a bit simpler, note that it is not suitable for managing\nsecrets in production.\n\n\n## The benefits of OCI, GitLab, and GitOps\n\n\nOCI artifacts let GitOps teams take deployments even further, adding\nsecurity benefits and keeping deployment payloads minimal. Users still gain\nall the benefits offered by git, such as having a source of truth for\ninfrastructure and collaborating on projects. OCI images add a packaging\napproach that improves the deployment aspect of GitOps.\n\n\nGitLab continues to learn from our customers and the cloud native community\non building experiences that help simplify GitOps workflows. To get started\nusing some of the features mentioned in this blog, you can sign up for a\n[free trial of GitLab\nUltimate](https://about.gitlab.com/free-trial/). 
We would also love to hear\nfrom users about their experiences with these tools, and you can provide\nfeedback in the [community\nforum](https://forum.gitlab.com/t/oci-images-as-source-of-truth-for-gitops-with-gitlab/120965).\n",[108,835,9,539,1384,748],{"slug":1808,"featured":6,"template":688},"how-to-use-oci-images-as-the-source-of-truth-for-continuous-delivery","content:en-us:blog:how-to-use-oci-images-as-the-source-of-truth-for-continuous-delivery.yml","How To Use Oci Images As The Source Of Truth For Continuous Delivery","en-us/blog/how-to-use-oci-images-as-the-source-of-truth-for-continuous-delivery.yml","en-us/blog/how-to-use-oci-images-as-the-source-of-truth-for-continuous-delivery",{"_path":1814,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":1815,"content":1821,"config":1826,"_id":1828,"_type":13,"title":1829,"_source":15,"_file":1830,"_stem":1831,"_extension":18},"/en-us/blog/how-tomorrows-tech-affects-sw-dev",{"title":1816,"description":1817,"ogTitle":1816,"ogDescription":1817,"noIndex":6,"ogImage":1818,"ogUrl":1819,"ogSiteName":675,"ogType":676,"canonicalUrls":1819,"schema":1820},"What devs need to know about tomorrow’s tech today","From 5G to edge computing, microservices and more, cutting-edge technologies will be mainstream soon. 
We asked more than a dozen DevOps practitioners and analysts which technologies developers need to start to understand today.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749681675/Blog/Hero%20Images/future-of-software-what-developers-need-to-know.png","https://about.gitlab.com/blog/how-tomorrows-tech-affects-sw-dev","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"What devs need to know about tomorrow’s tech today\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Valerie Silverthorne\"}],\n        \"datePublished\": \"2020-10-21\",\n      }",{"title":1816,"description":1817,"authors":1822,"heroImage":1818,"date":1823,"body":1824,"category":790,"tags":1825},[680],"2020-10-21","\n\n_This is part two of our four-part series on the future of software development. [Part one](/blog/software-developer-changing-role/) examines how the software developer role is changing. Part three looks at [the role artificial intelligence (AI) will play in software development](/blog/ai-in-software-development/), and part four tackles [how to future-proof your developer career](/blog/future-proof-your-developer-career/)._\n\nIf it feels like we’ve been talking about future tech like 5G and edge computing forever, we have. But they’re getting closer to reality, which means they should be on a developer’s radar. We asked 14 DevOps practitioners, analysts and GitLab experts which technologies are most likely to have an impact on software development in the next three to five years. Here’s what they said.\n\n## Edge computing comes of age\n\nThe fast-growing Internet of Things (IoT) market – worth $212 billion in 2019 and projected to hit $1.6 trillion in 2025 [according to market research firm Statista](https://www.statista.com/statistics/976313/global-iot-market-size/) – means edge computing may be coming to your DevOps team sooner than you think. 
Edge computing will challenge developers to literally put processing power within the application (on the “edge,” in other words) rather than having to reach out to the cloud for computations.\n\nToday’s edge computing is largely confined to telecom companies, says [Carlos Eduardo Arango Gutierrez](https://www.linkedin.com/in/eduardo-arango/?originalSubdomain=co), a software engineer at Red Hat (and a [GitLab Hero](/community/heroes/)), but in three to five years he sees front end developers needing to get a handle on this. “Part of my work at RedHat now is a lot of IoT and edge computing and I think every Kubernetes developer today is going to need to be thinking about it,” he says. “Developers are going to need to be thinking about networking but also about new types of routers and hardware architectures to support this.”\n\n## 5G is happening\n\nDespite the immense hype, a 5G wireless network rollout is underway around the world (here’s [an interactive map](https://www.speedtest.net/ookla-5g-map)). Statista predicts between [20 and 50 million 5G connections](https://hackernoon.com/top-10-software-development-trends-for-2020-you-need-to-know-as293690) as soon as the end of next year. Even if that forecast is optimistic, 5G will shortly upend mobile application use as we know it, and thus mobile application development. Dramatically faster download and upload times will give developers the chance to create more-feature-rich applications with better user experiences including potentially both [augmented](https://www.fi.edu/what-is-augmented-reality) and [virtual reality](https://www.wired.com/story/wired-guide-to-virtual-reality/).\n\n## Really, it’s about networking\n\nThat’s all a long way of saying that these cutting edge technologies are going to require developers to understand how to tie them neatly together. “In the future it doesn’t matter if you’re going to be good at the front end and know languages like Go or Java,” Carlos says. 
“You’re going to need to understand everything about networking. That’s critical to the future.”\n\n## Hardware becomes a factor\n\nSoftware developers tend to take hardware for granted, and why not? Today one phone or laptop is very much like the other but in a few years that will no longer be true. “As the speed of connectivity continues to evolve and as we hit certain thresholds we need to think about how we design solutions to take advantage of that,” says [Rafael Garcia](https://www.linkedin.com/in/jrafaelgarcia/), director of digital services at insurance conglomerate Aflac. “When storage became cheap it changed how you designed solutions and now with connectivity and broadband you don’t have to be worried about size anymore,” he says.\n\nSize is one consideration but there are many others, Carlos adds. Developers must move past the “if it works on a laptop it works everywhere” model and realize the production clusters and the distributed systems will have entirely different requirements for everything from design to security. “In the future, software developers need to understand the world is not your laptop,” he says.\n\n## Code (or secrets), heal thyself\n\nThe idea of self-healing code is something every DevOps team can embrace and it’s something GitLab CEO [Sid Sijbrandij](/company/team/#sytses) sees as a viable possibility. As an early example of this Sid points to [Kubernetes custom resource definitions](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) because they automatically know the state they should be in. “Viewed through a different lens it’s the same thing in technologies like [Vault](https://www.vaultproject.io),” he explains. “Instead of secrets in a company system lasting for years or months it has dynamic secrets that continually refresh. 
It’s self-healing for secrets.”\n\n## Microservices go mainstream\n\nYour DevOps team may not have jumped on the [microservices](/topics/microservices/) bandwagon yet – in our 2020 survey only 26% of respondents fully use them – but Sid says they’re key to the future. It will also be important to know how to manage them, he says. “The interactions between services are going to be important particularly when it comes to distributed systems. We’re going to need technology for tracing and troubleshooting services.”\n\n_Why isn’t AI on this list? It’s so critical to the future it will be covered in part three of this series._\n",[685,9,1347],{"slug":1827,"featured":6,"template":688},"how-tomorrows-tech-affects-sw-dev","content:en-us:blog:how-tomorrows-tech-affects-sw-dev.yml","How Tomorrows Tech Affects Sw Dev","en-us/blog/how-tomorrows-tech-affects-sw-dev.yml","en-us/blog/how-tomorrows-tech-affects-sw-dev",{"_path":1833,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":1834,"content":1840,"config":1846,"_id":1848,"_type":13,"title":1849,"_source":15,"_file":1850,"_stem":1851,"_extension":18},"/en-us/blog/how-we-removed-all-502-errors-by-caring-about-pid-1-in-kubernetes",{"title":1835,"description":1836,"ogTitle":1835,"ogDescription":1836,"noIndex":6,"ogImage":1837,"ogUrl":1838,"ogSiteName":675,"ogType":676,"canonicalUrls":1838,"schema":1839},"How we reduced 502 errors by caring about PID 1 in Kubernetes","For every deploy, scale down event, or pod termination, users of GitLab's Pages service were experiencing 502 errors. 
This explains how we found the root cause and rolled out a fix for it.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749682305/Blog/Hero%20Images/KubeCon2022.jpg","https://about.gitlab.com/blog/how-we-removed-all-502-errors-by-caring-about-pid-1-in-kubernetes","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"How we reduced 502 errors by caring about PID 1 in Kubernetes\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Steve Azzopardi\"}],\n        \"datePublished\": \"2022-05-17\",\n      }",{"title":1835,"description":1836,"authors":1841,"heroImage":1837,"date":1843,"body":1844,"category":683,"tags":1845},[1842],"Steve Azzopardi","2022-05-17","\n\n_This blog post and linked pages contain information related to upcoming products, features, and functionality. It is important to note that the information presented is for informational purposes only. Please do not rely on this information for purchasing or planning purposes. As with all projects, the items mentioned in this blog post and linked pages are subject to change or delay. The development, release, and timing of any products, features, or functionality remain at the sole discretion of GitLab Inc._\n\nOur [SRE on call](https://about.gitlab.com/handbook/engineering/infrastructure/incident-management/#engineer-on-call-eoc-responsibilities)\nwas getting paged daily that one of our\n[SLIs](https://www.youtube.com/watch?v=tEylFyxbDLE) was\nburning through our\n[SLOs](https://www.youtube.com/watch?v=tEylFyxbDLE) for the [GitLab\nPages](https://docs.gitlab.com/ee/user/project/pages/) service. It was\nintermittent and short-lived, but enough to cause user-facing impact which we\nweren't comfortable with. 
This turned into alert fatigue because there wasn't\nenough time for the SRE on call to investigate the issue and it wasn't\nactionable since it recovered on its own.\n\nWe decided to open up an [investigation issue](https://gitlab.com/gitlab-com/gl-infra/reliability/-/issues/15497)\nfor these alerts. We had to find out what the issue was since we were\nshowing `502` errors to our users and we needed a\n[DRI](https://about.gitlab.com/handbook/people-group/directly-responsible-individuals/)\nthat wasn't on call to investigate.\n\n## What is even going on?\n\nAs an [SRE](https://handbook.gitlab.com/job-families/engineering/infrastructure/site-reliability-engineer/)\nat GitLab, you get to touch a lot of services that you didn't build yourself and\ninteract with system dependencies that you might have not touched before.\nThere's always detective work to do!\n\nWhen we looked at the GitLab Pages logs we found that it's always returning\n[`ErrDomainDoesNotExist`](https://gitlab.com/gitlab-org/gitlab-pages/-/blob/e1f1effa23c520d3b8b717d831ccab7ba3dd494f/internal/routing/middleware.go#L22-26)\nerrors which result in a `502` error to our users. GitLab Pages [sends a request](https://gitlab.com/gitlab-org/gitlab-pages/-/blob/e1f1effa23c520d3b8b717d831ccab7ba3dd494f/internal/source/gitlab/client/client.go#L101-127)\nto [GitLab Workhorse](https://docs.gitlab.com/ee/development/workhorse/),\nspecifically the `/api/v4/internal/pages` route.\n\nGitLab Workhorse is a Go service in front of our Ruby on Rails monolith and\nit's deployed as a [sidecar](https://www.magalix.com/blog/the-sidecar-pattern)\ninside of the `webservice pod`, which runs Ruby on Rails using the `Puma` web\nserver.\n\nWe used the internal IP to correlate the GitLab Pages requests with GitLab Workhorse\ncontainers. 
We looked at multiple requests and found that all the 502 requests\nhad the following error attached to them: [`502 Bad Gateway with dial tcp 127.0.0.1:8080: connect: connection refused`](https://gitlab.com/gitlab-org/gitlab/-/blob/f64be48cc737f5d12c1c30f724af540a836dcc94/workhorse/internal/badgateway/roundtripper.go#L43).\nThis means that GitLab Workhorse couldn't connect to the Puma web server. So we\nneeded to go another layer deeper.\n\nThe Puma web server is what runs the Ruby on Rails monolith which has an\ninternal API endpoint but Puma was never getting these requests since it wasn't\nrunning. What this tells us is that Kubernetes kept our pod in the\n[service](https://kubernetes.io/docs/concepts/services-networking/service/)\neven when Puma wasn't responding, despite having [readiness probes](https://gitlab.com/gitlab-org/charts/gitlab/-/blob/4bb638bccc6a676f9fdd5bbf800f7d2b977efd55/charts/gitlab/charts/webservice/templates/deployment.yaml#L279-287)\nconfigured.\n\nBelow is the request flow between GitLab Pages, GitLab Workhorse, and Puma/Webservice to try and make it more clear:\n\n![overview of the request flow](https://about.gitlab.com/images/blogimages/how-we-removed-all-502-errors-by-caring-about-pid-1-in-kubernetes/overview.png){: .shadow.center}\n\n## Attempt 1: Red herring\n\nWe shifted our focus on GitLab Workhorse and Puma to try and understand how\nGitLab Workhorse was returning 502 errors in the first place. We found some\n`502 Bad Gateway with dial tcp 127.0.0.1:8080: connect: connection refused`\nerrors during container startup time. How could this be? 
With the readiness\nprobe, the pod shouldn't be added to the\n[Endpoint](https://kubernetes.io/docs/concepts/services-networking/service/#over-capacity-endpoints)\nuntil [all readiness probes pass](https://gitlab.com/gitlab-com/gl-infra/reliability/-/issues/15497#note_899321775).\nWe later found out that it's because of a [polling\nmechanism](https://gitlab.com/gitlab-com/gl-infra/reliability/-/issues/15497#note_899629314)\nthat we have for [Geo](https://docs.gitlab.com/ee/administration/geo/), which\nruns in the background, using a Goroutine in GitLab Workhorse, and pings Puma for Geo information.\nWe don't have Geo enabled on GitLab.com so we [simply disabled it](https://gitlab.com/gitlab-com/gl-infra/k8s-workloads/gitlab-com/-/merge_requests/1670)\nto reduce the noise.\n\nWe removed some 502 errors, but not the ones we wanted. It was just a red herring.\n\n## Attempt 2: Close but not quite\n\nAt this time, we were still burning through our SLO from time to time, so this\nwas still an urgent thing that we needed to fix. Now that we had cleaner logs for\n`502` errors it started to become a bit clearer that this was happening on pod\ntermination:\n\n```\n2022-04-05 06:03:49.000 UTC: Readiness probe failed\n2022-04-05 06:03:51.000 UTC: Puma (127.0.0.1:8080) started shutdown.\n2022-04-05 06:04:04.526 UTC: Puma shutdown finished.\n2022-04-05 06:04:04.000 UTC - 2022-04-05 06:04:46.000 UTC: workhorse started serving 502 constantly.  
42 seconds of serving 502 requests for any request that comes in apart from /api/v4/jobs/request\n```\n\nIn the timeline shown above, we see that we've kept serving requests well after\nour `Puma`/`webservice` container exited, and the first readiness probe failed.\nIf we look at the readiness probes we had on that pod we see the following:\n\n```\n$ kubectl -n gitlab get po gitlab-webservice-api-785cb54bbd-xpln2 -o jsonpath='{range .spec.containers[*]} {@.name}{\":\\n\\tliveness:\"} {@.livenessProbe} {\"\\n\\treadiness:\"} {@.readinessProbe} {\"\\n\"} {end}'\n webservice:\n        liveness: {\"failureThreshold\":3,\"httpGet\":{\"path\":\"/-/liveness\",\"port\":8080,\"scheme\":\"HTTP\"},\"initialDelaySeconds\":20,\"periodSeconds\":60,\"successThreshold\":1,\"timeoutSeconds\":30}\n        readiness: {\"failureThreshold\":3,\"httpGet\":{\"path\":\"/-/readiness\",\"port\":8080,\"scheme\":\"HTTP\"},\"initialDelaySeconds\":60,\"periodSeconds\":10,\"successThreshold\":1,\"timeoutSeconds\":2}\n  gitlab-workhorse:\n        liveness: {\"exec\":{\"command\":[\"/scripts/healthcheck\"]},\"failureThreshold\":3,\"initialDelaySeconds\":20,\"periodSeconds\":60,\"successThreshold\":1,\"timeoutSeconds\":30}\n        readiness: {\"exec\":{\"command\":[\"/scripts/healthcheck\"]},\"failureThreshold\":3,\"periodSeconds\":10,\"successThreshold\":1,\"timeoutSeconds\":2}\n```\n\nThis meant that for the `webservice` pod to be marked unhealthy and removed\nfrom the endpoints, Kubernetes had to get 3 consecutive failures with an\ninterval of 10 seconds, so in total that's 30 seconds. That seems a bit slow.\n\nOur next logical step was to reduce the `periodSeconds` for the readiness probe\nfor the `webservice` pod so we don't wait 30 seconds before removing the pod\nfrom the service when it becomes unhealthy.\n\nBefore doing so we had to understand if sending more requests to `/-/readiness`\nendpoint would have any knock-on effect with using more memory or anything\nelse. 
We had to [understand what the `/-/readiness` endpoint was doing](https://gitlab.com/gitlab-com/gl-infra/reliability/-/issues/15497#note_903812722)\nand whether it was safe to increase the frequency at which we send requests. We\ndecided it was safe, and after enabling it on\n[staging](https://gitlab.com/gitlab-com/gl-infra/k8s-workloads/gitlab-com/-/merge_requests/1686#note_903877755)\nand\n[canary](https://gitlab.com/gitlab-com/gl-infra/k8s-workloads/gitlab-com/-/merge_requests/1688#note_904501848),\nwe didn't see any increase in CPU/memory usage, as expected, and saw a\nreduction in 502 errors, which made us more confident that\nthis was the issue. We rolled this out to\n[production](https://gitlab.com/gitlab-com/gl-infra/k8s-workloads/gitlab-com/-/merge_requests/1689)\nwith high hopes.\n\nAs usual, Production is a different story from Staging or Canary, and it showed\nthat the change didn't remove all the 502 errors, just [enough to stop triggering the SLO every day](https://gitlab.com/gitlab-com/gl-infra/reliability/-/issues/15497#note_905993144),\nbut at least we removed the alert fatigue for the SRE on call. We were close, but not quite.\n\n## Attempt 3: All gone!\n\nAt this point, we were a bit lost and weren't sure what to look at next. We had\na bit of tunnel vision and kept blaming the fact that we weren't removing the\nPod from the `Endpoint` quickly enough. We even looked at [Google Cloud Platform\nNEGs](https://cloud.google.com/kubernetes-engine/docs/how-to/standalone-neg) to\nsee if we could have faster readiness probes and remove the pod quicker. 
However,\nthis wasn't ideal [because we wouldn't have solved this for our self-hosting customers](https://gitlab.com/gitlab-com/gl-infra/reliability/-/issues/15497#note_908359286)\nwho seem to be facing the same [problem](https://gitlab.com/gitlab-org/charts/gitlab/-/issues/2943).\n\nWhile researching, we also came across a known problem with [running `Puma` in\nKubernetes](https://github.com/puma/puma/blob/bf2548ce300c2b4f671582bc756dcec5861e815f/docs/kubernetes.md),\nand thought that might be the solution. However, we had already implemented a\n[blackout window](https://gitlab.com/gitlab-org/charts/gitlab/-/blob/c1b63f3a4867886bc1212d86985fc70e66b717c5/charts/gitlab/charts/webservice/templates/deployment.yaml#L223-224)\njust for this specific reason, so it couldn't be that either...in other words, it was another dead end.\n\nWe took a step back and looked at the [timelines one more time](https://gitlab.com/gitlab-com/gl-infra/reliability/-/issues/15497#note_910106152)\nand then it hit us. The Puma/webservice container terminates within a\nfew seconds, but the GitLab Workhorse one always takes 30 seconds. Is it because\nof the [long polling from GitLab Runner](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/21698)? 30 seconds\nis a \"special\" number for Kubernetes [pod termination](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination).\nWhen Kubernetes deletes a pod, it first sends the `TERM` signal to the\ncontainer and waits 30 seconds; if the container hasn't exited by then, it\nsends a `KILL` signal. 
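\n\nThe grace period itself is configurable per pod spec; a minimal sketch (the names are placeholders, and 30 is simply the Kubernetes default spelled out):\n\n```yaml\napiVersion: v1\nkind: Pod\nmetadata:\n  name: example\nspec:\n  # Seconds between TERM and KILL when the pod is deleted; defaults to 30.\n  terminationGracePeriodSeconds: 30\n  containers:\n  - name: app\n    image: nginx\n```\n\n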
This indicated that maybe GitLab Workhorse was never\nshutting down and Kubernetes had to kill it.\n\nOnce more, we looked at the GitLab Workhorse source code and [searched for the `SIGTERM` usage](https://gitlab.com/gitlab-org/gitlab/-/blob/d66f10e169a08cedcbfe70e3ea46cbfbb20d972d/workhorse/main.go#L238-258),\nand it did seem to support [graceful termination](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/62701), and\nit even had explicit logic for long polling requests. So was this just another\ndead end? Luckily, when the `TERM` signal is sent, Workhorse [logs a message that\nit's shutting down](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/62701). We looked\nat our logs for this specific message and didn't see anything. Is this it? We\naren't gracefully shutting down? But how? Why does it result in 502 errors?\nWhy does GitLab Pages keep using the same pod while it is terminating?\n\nWe know that the `TERM` signal is sent to PID 1 inside the container,\nand that process should handle the `TERM` signal for graceful shutdown. We\nlooked at the GitLab Workhorse process tree and this is what we found:\n\n```sh\ngit@gitlab-webservice-default-5d85b6854c-sbx2z:/$ ps faux\nUSER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND\nroot        1015  0.0  0.0 805036  4588 ?        Rsl  13:12   0:00 runc init\ngit         1005  0.3  0.0   5992  3784 pts/0    Ss   13:12   0:00 bash\ngit         1014  0.0  0.0   8592  3364 pts/0    R+   13:12   0:00  \\_ ps faux\ngit            1  0.0  0.0   2420   532 ?        Ss   12:52   0:00 /bin/sh -c /scripts/start-workhorse\ngit           16  0.0  0.0   5728  3408 ?        S    12:52   0:00 /bin/bash /scripts/start-workhorse\ngit           19  0.0  0.3 1328480 33080 ?       
Sl   12:52   0:00  \\_ gitlab-workhorse -logFile stdout -logFormat json -listenAddr 0.0.0.0:8181 -documentRoot /srv/gitlab/public -secretPath /etc/gitlab/gitlab-workhorse/secret -config /srv/gitlab/config/workhorse-config.toml\n```\n\nBingo! `gitlab-workhorse` is PID 19 in this case, and a child process of a\n[script](https://gitlab.com/gitlab-org/build/CNG/-/blob/92d3e22e9ff6c5cbb685aeea99813751d5e19a9d/gitlab-workhorse/Dockerfile#L51)\nthat we invoke. Taking a close look at the\n[script](https://gitlab.com/gitlab-org/build/CNG/-/blob/92d3e22e9ff6c5cbb685aeea99813751d5e19a9d/gitlab-workhorse/scripts/start-workhors),\nwe checked whether it listens for `TERM`, and it doesn't! So far everything indicated\nthat GitLab Workhorse was never getting the `TERM` signal, and so received\n`KILL` after 30 seconds. We updated our `scripts/start-workhorse` to use\n[`exec(1)`](https://linux.die.net/man/1/exec) so that `gitlab-workhorse`\nreplaced the PID of our bash script. That should have worked, right? When we tested\nthis locally, we saw the following process tree.\n\n```\ngit@gitlab-webservice-default-84c68fc9c9-xcsnm:/$ ps faux\nUSER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND\ngit          167  0.0  0.0   5992  3856 pts/0    Ss   14:27   0:00 bash\ngit          181  0.0  0.0   8592  3220 pts/0    R+   14:27   0:00  \\_ ps faux\ngit            1  0.0  0.0   2420   520 ?        Ss   14:24   0:00 /bin/sh -c /scripts/start-workhorse\ngit           17  0.0  0.3 1328228 32800 ?       Sl   14:24   0:00 gitlab-workhorse -logFile stdout -logFormat json -listenAddr 0.0.0.0:8181 -documentRoot /srv/gitlab/public -secretPath /etc/gitlab/gitlab-workhorse/secret -config /srv/gitlab/config/workhorse-config.toml\n```\n\nThis changed things a bit: `gitlab-workhorse` was no longer a child\nprocess of `/scripts/start-workhorse`; however, `/bin/sh` was still PID 1. 
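\n\nThe effect of `exec` can be shown with a tiny self-contained demo (not our actual script): both lines print the same PID, because `exec` replaces the current shell in place instead of forking a child.\n\n```sh\n#!/bin/sh\n# Print this shell's PID, then exec a new process and print its PID.\n# exec replaces the process image without forking, so the PID is kept\n# and signals sent to it reach the exec'd program directly.\necho \"before exec: $$\"\nexec sh -c 'echo \"after exec: $$\"'\n```\n\n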
What was even\ninvoking `/bin/sh`, which we didn't see anywhere in our\n[Dockerfile](https://gitlab.com/gitlab-org/build/CNG/-/blob/92d3e22e9ff6c5cbb685aeea99813751d5e19a9d/gitlab-workhorse/Dockerfile)?\nAfter some thumb-twiddling, we suspected that the container runtime was invoking\n`/bin/sh`. We went back to basics and looked at the\n[`CMD`](https://docs.docker.com/engine/reference/builder/#cmd) documentation to\nsee if we were missing something, and we were. We read the following:\n\n> If you use the shell form of the CMD, then the \u003Ccommand> will execute in `/bin/sh -c`:\n>\n> ```\n> FROM ubuntu\n> CMD echo \"This is a test.\" | wc -\n> ```\n>\n> If you want to run your \u003Ccommand> without a shell then you must express the command as a JSON array and give the full path to the executable. This array form is the preferred format of CMD. Any additional parameters must be individually expressed as strings in the array:\n>\n> ```\n> FROM ubuntu\n> CMD [\"/usr/bin/wc\",\"--help\"]\n> ```\n\nThis was exactly [what we were doing](https://gitlab.com/gitlab-org/build/CNG/-/blob/92d3e22e9ff6c5cbb685aeea99813751d5e19a9d/gitlab-workhorse/Dockerfile#L51)!\nWe weren't using `CMD` in exec form, but in shell form. Changing this and testing locally confirmed\nthat `gitlab-workhorse` was now PID 1 and received the termination signal:\n\n```\ngit@gitlab-webservice-default-84c68fc9c9-lzwmp:/$ ps faux\nUSER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND\ngit           65  1.0  0.0   5992  3704 pts/0    Ss   15:25   0:00 bash\ngit           73  0.0  0.0   8592  3256 pts/0    R+   15:25   0:00  \\_ ps faux\ngit            1  0.2  0.3 1328228 32288 ?       
Ssl  15:24   0:00 gitlab-workhorse -logFile stdout -logFormat json -listenAddr 0.0.0.0:8181 -documentRoot /srv/gitlab/public -secretPath /etc/gitlab/gitlab-workhorse/secret -config /srv/gitlab/config/workhorse-config.toml\n```\n\n```\n{\"level\":\"info\",\"msg\":\"shutdown initiated\",\"shutdown_timeout_s\":61,\"signal\":\"terminated\",\"time\":\"2022-04-13T15:27:57Z\"}\n{\"level\":\"info\",\"msg\":\"keywatcher: shutting down\",\"time\":\"2022-04-13T15:27:57Z\"}\n{\"error\":null,\"level\":\"fatal\",\"msg\":\"shutting down\",\"time\":\"2022-04-13T15:27:57Z\"}\n```\n\nOK, so we just needed to ship the `exec` and `CMD []` updates and we would have been\ndone, right? Almost. GitLab Workhorse proxies all API, web, and Git requests, so we couldn't just make a big change and expect that everything would be OK. We had to roll this out progressively to make\nsure we didn't break any existing working behavior, since this affects all the\nrequests that reach GitLab.com. To do this, we hid it behind a [feature\nflag](https://gitlab.com/gitlab-org/build/CNG/-/merge_requests/972) so GitLab\nWorkhorse is only PID 1 when the `GITLAB_WORKHORSE_EXEC` environment variable\nis set. This allowed us to deploy the change and enable it on only a small part\nof our fleet to watch for any problems. We were a bit more careful here and\nrolled it out zone by zone in Production, since we run in 3 zones. When we\nrolled it out in the [first\nzone](https://gitlab.com/gitlab-com/gl-infra/reliability/-/issues/15497#note_919259030)\nwe saw all 502 errors disappear! After fully rolling this out, we saw that [the\nproblem was fixed with no negative side\neffects](https://gitlab.com/gitlab-com/gl-infra/reliability/-/issues/15497#note_920585707). Hurray!\n\nWe still had one unanswered question: why was GitLab Pages still trying to use\nthe same connection even after the Pod had been removed from the Service when it was\nscheduled for deletion? 
When we looked at the Go internals, we saw that [Go reuses\nTCP connections](https://gitlab.com/gitlab-com/gl-infra/reliability/-/issues/15497#note_920642770)\nif we close the body of the response. So even though the pod is no longer part of the Service,\nthe client can still keep the TCP connection open and send requests; this explains why\nwe kept seeing 502s while a pod was being terminated, always from the same GitLab\nPages pod.\n\nNow it's all gone!\n\n## More things that we can explore\n\n1. We've made graceful termination for GitLab Workhorse the [default](https://gitlab.com/gitlab-com/gl-infra/k8s-workloads/gitlab-com/-/merge_requests/1732).\n1. Audit all of our Dockerfiles that use `CMD command` and fix them. We've found 10, and [fixed all of them](https://gitlab.com/gitlab-org/charts/gitlab/-/issues/3249).\n1. [Better readiness probe defaults for the `webservice` pod](https://gitlab.com/gitlab-org/charts/gitlab/-/merge_requests/2518).\n1. Add [linting](https://gitlab.com/gitlab-org/charts/gitlab/-/issues/3253) for Dockerfiles.\n1. See if any of our child processes need [zombie process reaping](https://blog.phusion.nl/2015/01/20/docker-and-the-pid-1-zombie-reaping-problem/).\n\n## Takeaways\n\n1. We should always care about what is PID 1 in a container.\n1. Always try to use `CMD [\"executable\",\"param1\",\"param2\"]` in your Dockerfile.\n1. Pods are removed from the Service/Endpoint asynchronously.\n1. If you are on GKE, [NEGs](https://cloud.google.com/kubernetes-engine/docs/how-to/standalone-neg) might be better for readiness probes.\n1. By default, there is a 30-second grace period between the `TERM` signal and the `KILL` signal when Pods terminate. You can adjust the time between the signals with `terminationGracePeriodSeconds`.\n1. 
The Go `http.Client` will reuse the TCP connection if [the body is closed](https://cs.opensource.google/go/go/+/refs/tags/go1.18.2:src/net/http/response.go;l=59-64) which in this case made the issue worse.\n\nThank you to [@igorwwwwwwwwwwwwwwwwwwww](https://gitlab.com/igorwwwwwwwwwwwwwwwwwwww), [@gsgl](https://gitlab.com/gsgl), [@jarv](https://gitlab.com/jarv), and [@cmcfarland](https://gitlab.com/cmcfarland) for helping me debug this problem!\n\n",[9],{"slug":1847,"featured":6,"template":688},"how-we-removed-all-502-errors-by-caring-about-pid-1-in-kubernetes","content:en-us:blog:how-we-removed-all-502-errors-by-caring-about-pid-1-in-kubernetes.yml","How We Removed All 502 Errors By Caring About Pid 1 In Kubernetes","en-us/blog/how-we-removed-all-502-errors-by-caring-about-pid-1-in-kubernetes.yml","en-us/blog/how-we-removed-all-502-errors-by-caring-about-pid-1-in-kubernetes",{"_path":1853,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":1854,"content":1860,"config":1866,"_id":1868,"_type":13,"title":1869,"_source":15,"_file":1870,"_stem":1871,"_extension":18},"/en-us/blog/improve-cd-workflows-helm-chart-registry",{"title":1855,"description":1856,"ogTitle":1855,"ogDescription":1856,"noIndex":6,"ogImage":1857,"ogUrl":1858,"ogSiteName":675,"ogType":676,"canonicalUrls":1858,"schema":1859},"Get started with GitLab's Helm Package Registry","Improve CD workflows and speed up application deployment using our new Helm Package Registry.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749668078/Blog/Hero%20Images/cover-image-helm-registry.jpg","https://about.gitlab.com/blog/improve-cd-workflows-helm-chart-registry","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"Get started with GitLab's Helm Package Registry\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Philip Welz\"}],\n        \"datePublished\": \"2021-10-18\",\n      
}",{"title":1855,"description":1856,"authors":1861,"heroImage":1857,"date":1863,"body":1864,"category":1747,"tags":1865},[1862],"Philip Welz","2021-10-18","In our 14.1 release, we offered the ability to add Helm charts to the GitLab\nPackage Registry. Here's everything you need to know to leverage application\ndeployment with these new features.\n\n\n## The role of container images\n\n\nThe de-facto standard is to package applications into [OCI\nImages](https://github.com/opencontainers/image-spec), which are often just\nreferred to as `container images` and more often as `Docker containers`. The\n[Open Container Initiative](https://opencontainers.org/) was launched in\n2015 by Docker and other companies to define industry standards around\ncontainer image formats and runtimes. GitLab introduced an OCI-conformant\n[Container Registry](/blog/gitlab-container-registry/) with the release of\n[GitLab 8.8](/releases/2016/05/22/gitlab-8-8-released/) in May 2016.\n\n\nToday, a common and widely adopted approach is to deploy applications with\n[Helm charts](https://helm.sh/) to [Kubernetes](https://kubernetes.io/).\nThis blog covers that approach, together with the\n[GitLab 14.1](/releases/2021/07/22/gitlab-14-1-released/) feature of adding Helm\ncharts to the [GitLab Package\nRegistry](https://docs.gitlab.com/ee/user/packages/package_registry/).\n\n\n### Install software to Kubernetes\n\n\nIn the DevOps era, [APIs](https://en.wikipedia.org/wiki/API) became\nincredibly popular, helping to drive demand for Kubernetes.\n\n\nThe core of Kubernetes' control plane is the API server. 
The API server\nexposes an HTTP REST API that lets end users, different parts of your\ncluster, and external components communicate with one another.\n\n\nTo interact with the API server, we can use the command-line tool\n[kubectl](https://kubernetes.io/docs/reference/kubectl/overview/) - although\nit is also possible to use software development kits (SDKs) or any\nclient that understands REST, like curl, which was released in 1997.\n\n\nBut which data format is best to use?\n\n\nModern APIs most likely use JSON. JSON is a human-readable format that\nprovides access to machine-readable data. Here is an example for\nKubernetes:\n\n\n```json\n\n{\n    \"kind\": \"Pod\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"name\": \"nginx\",\n        \"creationTimestamp\": null,\n        \"labels\": {\n            \"run\": \"nginx\"\n        }\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"name\": \"nginx\",\n                \"image\": \"nginx\",\n                \"resources\": {}\n            }\n        ],\n        \"restartPolicy\": \"Always\",\n        \"dnsPolicy\": \"ClusterFirst\"\n    },\n    \"status\": {}\n}\n\n```\n\n\nOne downside of JSON is that comments are not supported. That is one of several\nreasons why YAML stepped in and took the spot as the de-facto language to\nuse for declarative configurations. The Kubernetes API transforms YAML to\nJSON behind the scenes. As you can easily convert back and forth between\nboth, YAML tends to be more user-friendly. 
The same nginx Pod in YAML:\n\n\n```yaml\n\napiVersion: v1\n\nkind: Pod\n\nmetadata:\n  creationTimestamp: null\n  labels:\n    run: nginx\n  name: nginx\nspec:\n  containers:\n  # NOTE: If no tag is specified, latest will be used\n  - image: nginx\n    name: nginx\n    resources: {}\n  dnsPolicy: ClusterFirst\n  restartPolicy: Always\nstatus: {}\n\n```\n\n\nNow you are ready to save the YAML code in a file called `nginx.yaml` and\ndeploy it into Kubernetes:\n\n\n```shell\n\n$ kubectl apply --filename=nginx.yaml \n\n```\n\n\n### Create a Helm chart\n\n\nApplying YAML configuration files can get overwhelming, especially when\nneeding to deploy into several environments or wanting to version the\nmanifests. It is also cumbersome to maintain plain YAML files for more\ncomplex deployments, which can easily grow to more than 1,000 lines per\nfile.\n\n\nInstead, how about using a format that packages our applications and makes\nthem easily reproducible with templates? How about adding our own versioning\nscheme to this packaged application? How about deploying the same version\nwith a few lines of code to multiple environments? This all comes with Helm.\n\n\nTo create a Helm package, you have to ensure that the Helm CLI is\n[installed](https://helm.sh/docs/intro/install/) on your system (example\nwith Homebrew on macOS: `brew install helm`).\n\n\n```shell\n\n$ helm create nginx \n\n```\n\n\nInspect the created Helm boilerplate files with `ls -lR` or `tree` on the\nCLI. 
This Helm chart can also be tested in a sandbox environment to verify\nit is operational.\n\n\n```shell\n\n.\n\n├── Chart.yaml\n\n├── charts\n\n├── templates\n\n│   ├── NOTES.txt\n\n│   ├── _helpers.tpl\n\n│   ├── deployment.yaml\n\n│   ├── hpa.yaml\n\n│   ├── ingress.yaml\n\n│   ├── service.yaml\n\n│   ├── serviceaccount.yaml\n\n│   └── tests\n\n│       └── test-connection.yaml\n\n└── values.yaml\n\n```\n\n\nNOTE: You can read more about the starter Chart\n[here](https://helm.sh/docs/chart_template_guide/getting_started/).\n\n\nHelm helpfully creates a starter chart directory with the common files\nand directories used in a chart, using NGINX as an example. We can again\ninstall this into our Kubernetes cluster:\n\n\n```shell\n\n$ helm install nginx .\n\n```\n\n\n### Package Distribution\n\n\nThus far, we have learned that applications are packaged in containers and\nare installed using a Helm chart. Both methods require central distribution\nstorage that is publicly accessible, or accessible in your local network\nenvironment where the Kubernetes clusters are running.\n\n\nThe Helm documentation provides insights on [running your own Helm\nregistry](https://helm.sh/docs/topics/registries/), similar to hosting your\nown Docker container registry.\n\n\nWhat if we could avoid do-it-yourself DevOps and have both containers and\nHelm charts in one central DevOps platform? After maturing the [container\nregistry in\nGitLab](https://docs.gitlab.com/ee/user/packages/container_registry/),\ncommunity contributors helped add the [Helm chart\nregistry](https://docs.gitlab.com/ee/user/packages/helm_repository/index.html)\nin 14.1.\n\n\nBuilding the container image and Helm chart is part of the CI/CD pipeline\nstages and jobs. 
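\n\nBefore wiring the chart into a pipeline, you can also sanity-check it locally; `helm lint` plus a client-side dry run catches most template mistakes:\n\n```shell\n\n$ helm lint nginx\n\n$ helm template nginx ./nginx | kubectl apply --dry-run=client --filename=-\n\n```\n\n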
The missing bit is the automated production deployment\nusing Helm charts in your Kubernetes cluster.\n\n\nAn additional benefit in CI/CD is reusing the authentication mechanism, and\nworking in the same trust environment with security jobs before actually\nuploading and publishing any containers and charts.\n\n\n### Build the Helm Chart\n\n\n```shell\n\n$ helm package nginx \n\n```\n\n\nThe command creates a new tar.gz archive ready to upload. Before doing so,\nyou can inspect the archive with the `tar` command to verify its content.\n\n\n```shell\n\n$ tar ztf nginx-0.1.0.tgz\n\n\nnginx/Chart.yaml\n\nnginx/values.yaml\n\nnginx/templates/NOTES.txt\n\nnginx/templates/_helpers.tpl\n\nnginx/templates/deployment.yaml\n\nnginx/templates/hpa.yaml\n\nnginx/templates/ingress.yaml\n\nnginx/templates/service.yaml \n\nnginx/templates/serviceaccount.yaml\n\nnginx/templates/tests/test-connection.yaml\n\nnginx/.helmignore\n\n```\n\n\n### Push the Helm chart to the registry\n\n\nWith the [helm-push](https://github.com/chartmuseum/helm-push/#readme)\nplugin for Helm we can now upload the chart to the GitLab Helm Package\nRegistry:\n\n\n```shell\n\n$ helm repo add --username \u003Cusername> --password \u003Cpersonal_access_token> \\\n    \u003CREGISTRY_NAME> \\\n    https://gitlab.com/api/v4/projects/\u003Cproject_id>/packages/helm/stable\n\n$ helm push nginx-0.1.0.tgz \u003CREGISTRY_NAME>\n\n```\n\n\nThis step should be automated for a production-ready deployment with a\nGitLab CI/CD job.\n\n\n```yaml\n\ndefault:\n  image: dtzar/helm-kubectl\n  before_script:\n    - 'helm repo add --username gitlab-ci-token --password ${CI_JOB_TOKEN} ${CI_PROJECT_NAME} ${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/packages/helm/stable'\nstages:\n  - upload\nupload:\n  stage: upload\n  script:\n    - 'helm plugin install https://github.com/chartmuseum/helm-push.git'\n    - 'helm push ./charts/podtatoserver-0.1.0.tgz ${CI_PROJECT_NAME}'\n```\n\n\n### Install the Helm chart\n\n\nFirst, add the Helm chart registry to your local CLI 
configuration and test\nthe manual installation.\n\n\n```shell\n\n$ helm repo add --username \u003Cusername> --password \u003Cpersonal_access_token> \\\n    \u003CREGISTRY_NAME> \\\n    https://gitlab.com/api/v4/projects/\u003Cproject_id>/packages/helm/stable\n\n$ helm install nginx \u003CREGISTRY_NAME>/nginx\n\n```\n\n\nOnce it works, you can continue with adding an automated installation job\ninto the CI/CD pipeline.\n\n\n```yaml\n\ndefault:\n  image: alpine/helm\n  before_script:\n    - 'helm repo add --username gitlab-ci-token --password ${CI_JOB_TOKEN} ${CI_PROJECT_NAME} ${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/packages/helm/stable'\nstages:\n  - install\ninstall:\n  stage: install\n  script:\n    - 'helm repo update'\n    - 'helm install nginx ${CI_PROJECT_NAME}/nginx'\n```\n\n\n### Complete your DevOps lifecycle\n\n\nYou can learn more about the newest GitLab registries for Helm and Terraform\nin this [#EveryoneCanContribute cafe\nsession](https://everyonecancontribute.com/post/2021-07-28-cafe-40-terraform-helm-gitlab-registry/)\nand inspect the [deployment\nrepository](https://gitlab.com/everyonecancontribute/kubernetes/civo-k3s).\n\n\nTry the Helm chart registry and share your workflows. Are there any features\nmissing to complete your DevOps lifecycle? 
Let us know [on\nDiscord](https://discord.gg/qgQWhD6wWV).\n\n\nCover image by [Joseph Barrientos](https://unsplash.com/@jbcreate_) on\n[Unsplash](https://unsplash.com/photos/eUMEWE-7Ewg)\n\n{: .note}\n",[685,815,9],{"slug":1867,"featured":6,"template":688},"improve-cd-workflows-helm-chart-registry","content:en-us:blog:improve-cd-workflows-helm-chart-registry.yml","Improve Cd Workflows Helm Chart Registry","en-us/blog/improve-cd-workflows-helm-chart-registry.yml","en-us/blog/improve-cd-workflows-helm-chart-registry",{"_path":1873,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":1874,"content":1880,"config":1885,"_id":1887,"_type":13,"title":1888,"_source":15,"_file":1889,"_stem":1890,"_extension":18},"/en-us/blog/install-gitlab-one-click-gcp-marketplace",{"title":1875,"description":1876,"ogTitle":1875,"ogDescription":1876,"noIndex":6,"ogImage":1877,"ogUrl":1878,"ogSiteName":675,"ogType":676,"canonicalUrls":1878,"schema":1879},"Install GitLab with a single click from the new GCP Marketplace","GitLab is now available on the new Google Cloud Platform Marketplace, so you can deploy GitLab on Google Kubernetes Engine with a single click!","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749680061/Blog/Hero%20Images/gcp-send-gitlab-large.png","https://about.gitlab.com/blog/install-gitlab-one-click-gcp-marketplace","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"Install GitLab with a single click from the new GCP Marketplace\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"William Chia\"}],\n        \"datePublished\": \"2018-07-18\",\n      }",{"title":1875,"description":1876,"authors":1881,"heroImage":1877,"date":1882,"body":1883,"category":300,"tags":1884},[1343],"2018-07-18","\nToday, Google Cloud announced its [new Google Cloud Platform (GCP) 
marketplace](https://cloudplatform.googleblog.com/2018/07/introducing-commercial-kubernetes-applications-in-gcp-marketplace.html) with the ability to deploy applications to your Kubernetes clusters on Google Kubernetes Engine (GKE). We’re proud to make GitLab available in the GCP Marketplace from day one. While you can [install GitLab almost anywhere](/install/), the new GCP Marketplace app installs with just a single click. It's the easiest way to get your own self-managed GitLab instance up and running.\n\n![Deploy GitLab on Google Cloud Platform](https://about.gitlab.com/images/google-cloud-platform/gcp-send-gitlab-medium.png)\n\n### Not looking to manage your own instance?\n\nFolks who don’t want to take on the overhead of administering their own GitLab instance can [sign up for GitLab.com](https://gitlab.com/users/sign_in). GitLab.com is a SaaS offering that runs the same software as GitLab self-managed, managed by GitLab.\n\nRecently, we announced our [migration from Azure to GCP](/blog/moving-to-gcp/). This migration is the first step in our goal of running GitLab.com as a cloud native application on Kubernetes. The migration has involved careful planning along with decomposing GitLab into individual services. The lessons learned through our migration have translated directly into how we are building the GitLab Helm Chart. The work we’ve done to migrate GitLab.com has fueled our ability to offer a solid option for self-managed users to deploy GitLab to Kubernetes.\n\n### Want to deploy your application to Kubernetes?\n\nWith a built-in container registry and [Kubernetes integration](/solutions/kubernetes/), GitLab makes it easier than ever to get started with containers and cloud native development. [GitLab CI/CD](/topics/ci-cd/) can deploy your application to any Kubernetes cluster.\n\nIf you don’t have a Kubernetes cluster, we’ve got you covered. 
The easiest way to get set up is using our [GKE Integration](/partners/technology-partners/google-cloud-platform/) and [Auto DevOps](https://docs.gitlab.com/ee/topics/autodevops/). It takes just a few clicks to set up, then you have a full deployment pipeline. Just commit your code and GitLab does the rest.\n\n![GitLab deploys your app to Google Cloud Platform](https://about.gitlab.com/images/google-cloud-platform/gitlab-send-app-medium.png)\n\n#### Join us at Google Next\n\nNext week on July 24-27 we’ll be at [Google Next](https://cloud.withgoogle.com/next18/sf/) in San Francisco, where there’s a lot going on. [Follow GitLab on Twitter](https://twitter.com/gitlab) to stay up to date on announcements from the show. If you’re at the show, stop by booth #S1629 and say hi! We’d love to hear how you are using GitLab and show you how our GKE Integration and Marketplace install work.  \n\n#### Summary\n\nYou can use GitLab either as a self-managed app or as a service on GitLab.com. Today, we’ve made it easier than ever to install [GitLab with the GCP Marketplace](https://console.cloud.google.com/marketplace/details/gitlab-public/gitlab?filter=solution-type:k8s). Additionally, we’ll be moving GitLab.com to GCP and soon afterward to GKE. You can look forward to the increased stability and performance that Kubernetes will bring to GitLab.com. Regardless of whether you are using self-managed GitLab or GitLab.com, GitLab’s Kubernetes integration and GKE integration make it easy to deploy your app to Kubernetes. 
Stop by Google Next and follow our Twitter feed to get the latest news on using GitLab together with Google Cloud Platform.\n",[727,1150,1149,9],{"slug":1886,"featured":6,"template":688},"install-gitlab-one-click-gcp-marketplace","content:en-us:blog:install-gitlab-one-click-gcp-marketplace.yml","Install Gitlab One Click Gcp Marketplace","en-us/blog/install-gitlab-one-click-gcp-marketplace.yml","en-us/blog/install-gitlab-one-click-gcp-marketplace",{"_path":1892,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":1893,"content":1898,"config":1903,"_id":1905,"_type":13,"title":1906,"_source":15,"_file":1907,"_stem":1908,"_extension":18},"/en-us/blog/introducing-gitlab-serverless",{"title":1894,"description":1895,"ogTitle":1894,"ogDescription":1895,"noIndex":6,"ogImage":1641,"ogUrl":1896,"ogSiteName":675,"ogType":676,"canonicalUrls":1896,"schema":1897},"Announcing GitLab Serverless","The true value of serverless is best realized via a single-application DevOps experience – that's why we're launching GitLab Serverless.","https://about.gitlab.com/blog/introducing-gitlab-serverless","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"Announcing GitLab Serverless\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Priyanka Sharma\"}],\n        \"datePublished\": \"2018-12-11\",\n      }",{"title":1894,"description":1895,"authors":1899,"heroImage":1641,"date":1900,"body":1901,"category":300,"tags":1902},[1082],"2018-12-11","\n\n[Serverless](/topics/serverless/) is the latest innovation in cloud computing that promises to alter the cost-benefit equation for enterprises. As our CEO, [Sid Sijbrandij](/company/team/#sytses) says, \"All roads lead to compute.\" There is a race among providers to acquire as many workloads from enterprises as possible, at the cheapest cost. 
The latter is where serverless comes in: serverless computing is an execution model in which the cloud provider acts as the server, dynamically managing the allocation of machine resources. Pricing is based on the actual resources consumed by an application, rather than on pre-purchased units of capacity.\n\nThis field began with the release of [AWS Lambda](https://en.wikipedia.org/wiki/AWS_Lambda) in November 2014. In the four short years since then, it has become a well-known workflow that enterprises are eager to adopt. Today, we are announcing [GitLab Serverless](/topics/serverless/) to enable our users to take advantage of the benefits of serverless.\n\n## GitLab Serverless is launching Dec. 22\n\nGitLab is the only single application for the entire [DevOps lifecycle](/topics/devops/). As part of that vision, we will release GitLab Serverless in GitLab 11.6, coming later this month, to allow enterprises to plan, build, and manage serverless workloads with the rest of their code from within the same GitLab UI. It leverages [Knative](https://cloud.google.com/knative/), which enables [autoscaling](https://en.wikipedia.org/wiki/Autoscaling) down to zero and back up to run serverless workloads on Kubernetes. This allows businesses to employ a multi-cloud strategy and leverage the value of serverless without being locked into a specific cloud provider.\n\nIn order to bring the best-in-class to our users, we partnered with [TriggerMesh](https://triggermesh.com/) founder [Sebastien Goasguen](https://twitter.com/sebgoa) and his team. Sebastien has been part of the serverless landscape since the beginning. He built a precursor to Knative, Kubeless. He is actively involved with the Knative community and understands the workflow from soup to nuts. Sebastien says, \"We are excited to help GitLab enable all their users to deploy functions directly on the Knative function-as-a-service clusters. 
We believe that these additions to GitLab will give those users the best possible experience for complete serverless computing from beginning to end.\"\n\n## \"Serverless first\"\n\nAs any attendee at [AWS re:Invent](/blog/aws-reinvent-recap/) would have noticed, the behemoth is putting all its energies behind serverless. We heard [stories from the likes of Trustpilot](https://www.computerworlduk.com/cloud-computing/how-trustpilot-takes-serverless-first-approach-engineering-with-aws-3688267/) about changing their engineering culture to \"serverless first.\" This is because serverless cloud providers save money by not having to keep idle machines provisioned and running, and are passing on the benefits to their customers. While this is amazing news, it is hard to truly embrace a workflow if it lives outside of developers' entrenched habits. GitLab has millions of users and is used by over 100,000 organizations, and with GitLab Serverless they can now enjoy the cost savings and elegant code design serverless brings, from the comfort of their established workflows.\n\nAs with all GitLab endeavors, making serverless multi-cloud and accessible to everyone is a big, hairy, audacious goal. Today, Knative can be installed on a Kubernetes cluster with a single click via the GitLab Kubernetes integration. It shipped in [GitLab 11.5](/releases/2018/11/22/gitlab-11-5-released/#easily-deploy-and-integrate-knative-with-gitlab).\n\n### How to activate GitLab Serverless\n\nStarting with the release of GitLab 11.6 on Dec. 22, the \"Serverless\" tab will be available for users as an alpha offering. Please do check it out and share your feedback with us.\n\n1. Go to your GitLab instance and pick your project of choice.\n2. Click on the `Operations` menu item in the sidebar.\n3. Pick `Serverless` to view the list of all the functions you have defined. 
You will also be able to see a brief description as well as the Knative cluster the function is deploying to.\n\n![Serverless list view](https://gitlab.com/gitlab-org/gitlab-ce/uploads/8b821d4aaa1bb75375dc54567a4313ad/CE-project__serverless-grouped.png \"Serverless list view\"){: .shadow.large.center}\n\nTo dig further, click into the function for more info.\n\n![function detail view](https://gitlab.com/gitlab-org/gitlab-ce/uploads/9e1e3893aa5369a2a165d1dd95c98dd8/CE-project__serverless--function-details.png \"function detail view\"){: .shadow.large.center}\n\nAll this goodness will be available Dec. 22. In the meantime, we would love to see you at [KubeCon Seattle](/events), where our product and engineering experts will be on hand to talk all things serverless with attendees. Hope to see you at booth S44!\n",[1004,685,984,232,9],{"slug":1904,"featured":6,"template":688},"introducing-gitlab-serverless","content:en-us:blog:introducing-gitlab-serverless.yml","Introducing Gitlab Serverless","en-us/blog/introducing-gitlab-serverless.yml","en-us/blog/introducing-gitlab-serverless",{"_path":1910,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":1911,"content":1916,"config":1921,"_id":1923,"_type":13,"title":1924,"_source":15,"_file":1925,"_stem":1926,"_extension":18},"/en-us/blog/introducing-the-gitlab-kubernetes-agent",{"title":1912,"description":1913,"ogTitle":1912,"ogDescription":1913,"noIndex":6,"ogImage":1202,"ogUrl":1914,"ogSiteName":675,"ogType":676,"canonicalUrls":1914,"schema":1915},"Understand the new GitLab Agent for Kubernetes","Just released in 13.4, our brand new Kubernetes Agent provides a secure and K8s–friendly approach to integrating GitLab with your clusters.","https://about.gitlab.com/blog/introducing-the-gitlab-kubernetes-agent","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"Understand the new GitLab Agent for Kubernetes\",\n        \"author\": 
[{\"@type\":\"Person\",\"name\":\"Viktor Nagy\"}],\n        \"datePublished\": \"2020-09-22\",\n      }",{"title":1912,"description":1913,"authors":1917,"heroImage":1202,"date":1918,"body":1919,"category":683,"tags":1920},[765],"2020-09-22","\n\nWe are happy to share the first iteration of the GitLab Agent for Kubernetes with our users and community. The Agent is the foundation for the next generation of the integration between GitLab and Kubernetes. \n\n## A bit of history of the GitLab Kubernetes Integrations\n\nGitLab's current Kubernetes integrations were introduced more than three years ago. Their primary goal was to allow a simple setup of clusters and provide a smooth deployment experience to our users. These integrations have served us well over the past years, but their weaknesses were limiting for some important use cases. The biggest weaknesses we see with the current integration are:\n\n- the requirement to open up the cluster to the internet, especially to GitLab\n- the need for cluster admin rights to get the benefit of GitLab Managed Clusters\n- exclusive support for push-based deployments that might not suit some highly regulated industries\n\nA few months ago, the Configure Team at GitLab started going in a new direction to come up with an integration that could address these weaknesses and provide a cloud native tie-in between GitLab and Kubernetes. This new direction is built on the GitLab Agent for Kubernetes, which we released in [GitLab 13.4](/releases/2020/09/22/gitlab-13-4-released/).\n\n## Design goals\n\nWhen we sat down to address the above weaknesses, we came up with a few principles that we are seeking to follow.\n\nWe want to be good cloud native citizens, and work together with the community, instead of reinventing the wheel.\n\nWe primarily want to serve expert Kubernetes platform engineers. 
While the current GitLab Managed Clusters and cluster creation from within GitLab might serve many use cases, they're primarily aimed at simple cluster setup and are not flexible enough to be the basis for production clusters. We want to change this approach, and are focusing on the needs of expert Kubernetes engineers first. We think that coming up with sane defaults will provide the necessary simplicity for new Kubernetes users as well.\n\nWe want to offer a secure solution that allows cluster operators to restrict GitLab's rights in the cluster and does not require opening up the cluster to the Internet.\n\n## The Agent\n\nFollowing the above goals, we've started to develop the GitLab Agent for Kubernetes. The Agent provides a permanent communication channel between GitLab and the cluster. To follow industry best practices for [GitOps](/topics/gitops/), the Agent is configured in code rather than through a UI.\n\nThe current version of the Agent allows for pull-based deployments. Its deployment machinery is built on the [`gitops-engine`](https://github.com/argoproj/gitops-engine), a project initiated by ArgoCD and Flux where GitLab engineers are actively contributing as well.\n\n### Setting up the GitLab Agent\n\nThe Agent needs to be set up first. This requires a few actions from the user:\n\n- create an Agent token for authentication with GitLab, and store it in your cluster as a secret\n- commit the necessary Agent configurations in one of your repositories\n- install the Agent to your cluster\n\n### Deployments with an Agent\n\nAs mentioned above, the Agent needs a configuration directory inside one of your repositories. This configuration describes the projects that the Agent syncs into your clusters. We call each synced project a __manifest project__. The manifest project should contain Kubernetes manifest files. 
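\n\nConceptually, the pull-based sync of a manifest project reduces to a reconcile loop: compare the desired manifests against what the cluster runs, and apply the difference. A toy Go sketch of that idea (purely illustrative; the real Agent delegates this work to the `gitops-engine` library, and the function and data shapes below are invented for this example):

```go
package main

import "fmt"

// reconcile applies every manifest whose desired content differs from what the
// cluster currently runs, and returns the names of the objects it changed.
// (Hypothetical stand-in: maps of name -> manifest content replace real
// Kubernetes objects and API calls.)
func reconcile(desired, cluster map[string]string) []string {
	var applied []string
	for name, manifest := range desired {
		if cluster[name] != manifest {
			cluster[name] = manifest // stands in for "kubectl apply"
			applied = append(applied, name)
		}
	}
	return applied
}

func main() {
	cluster := map[string]string{}
	desired := map[string]string{"web": "image: registry.example.com/app:v2"}
	fmt.Println(len(reconcile(desired, cluster))) // first pass applies the change
	fmt.Println(len(reconcile(desired, cluster))) // second pass is a no-op
}
```

Running the loop twice shows the idempotence GitOps relies on: the first pass applies the changed manifest, the second finds nothing left to do.\n\n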
The __manifest project__ can live either inside your application code repository or in a separate one.\n\nWe've set up a simple example that shows a __manifest project__ and an __application project__. In this example [GitLab CI/CD](/topics/ci-cd/) in the __application project__ is used to create a container image and update the __manifest project__. Then the Agent picks up the changes from the __manifest project__, and deploys the Kubernetes manifests stored there.\n\n### Limitations\n\nAs this is the initial release of the Agent, it has many known limitations. We don't support all the amazing features the previous GitLab Kubernetes integration does, such as [Auto DevOps](https://docs.gitlab.com/ee/topics/autodevops/), deploy boards, GitLab Managed Apps, etc. To start, in GitLab 13.4 we limited our focus to supporting pull-based deployment for Helm-based GitLab installations. \n\nFollowing the current release, we will be focusing on:\n\n- [shipping the GitLab Agent for Kubernetes as part of the Official Linux Package](https://gitlab.com/groups/gitlab-org/-/epics/3834)\n- [supporting the deployment of private repositories](https://gitlab.com/gitlab-org/gitlab/-/issues/220912)\n\n## Further plans for GitLab Kubernetes Integrations\n\nThe Agent opens up many new opportunities for GitLab's Kubernetes integrations. Having an active component allows us to provide all the GitLab functionalities in locked down clusters as well. We're currently looking into the following areas to support with the agent:\n\n- integrate cluster-side dynamic container scanning with GitLab\n- use GitLab as an authentication and authorization provider for Kubernetes clusters\n- offer linters and checks for Kubernetes best practices on deployed resources\n- proxy cluster services easily through GitLab\n\nYou can see all our plans in the [Agent epic](https://gitlab.com/groups/gitlab-org/-/epics/3329) where we invite you to give us feedback on this direction. 
\n\nYou can view a demo of how to install and use the GitLab Agent below:\n\n\u003Cfigure class=\"video_container\">\n  \u003Ciframe src=\"https://player.vimeo.com/video/505413162\" width=\"640\" height=\"480\" frameborder=\"0\" allow=\"autoplay; fullscreen; picture-in-picture\" allowfullscreen>\u003C/iframe>\n\u003C/figure>\n",[9,813,727,1248],{"slug":1922,"featured":6,"template":688},"introducing-the-gitlab-kubernetes-agent","content:en-us:blog:introducing-the-gitlab-kubernetes-agent.yml","Introducing The Gitlab Kubernetes Agent","en-us/blog/introducing-the-gitlab-kubernetes-agent.yml","en-us/blog/introducing-the-gitlab-kubernetes-agent",{"_path":1928,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":1929,"content":1935,"config":1940,"_id":1942,"_type":13,"title":1943,"_source":15,"_file":1944,"_stem":1945,"_extension":18},"/en-us/blog/is-serverless-the-end-of-ops",{"title":1930,"description":1931,"ogTitle":1930,"ogDescription":1931,"noIndex":6,"ogImage":1932,"ogUrl":1933,"ogSiteName":675,"ogType":676,"canonicalUrls":1933,"schema":1934},"Is serverless the end of ops?","What is Serverless architecture, what are the pros and cons of using it and where will it go in the future?","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749671845/Blog/Hero%20Images/serverless-ops-blog.jpg","https://about.gitlab.com/blog/is-serverless-the-end-of-ops","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"Is serverless the end of ops?\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Chrissie Buchanan\"}],\n        \"datePublished\": \"2019-09-12\",\n      }",{"title":1930,"description":1931,"authors":1936,"heroImage":1932,"date":1937,"body":1938,"category":790,"tags":1939},[787],"2019-09-12","\nWe’re not playing tricks when we say [serverless](/topics/serverless/) isn’t actually serverless. 
It’s not that servers aren’t doing work, it’s just that _your_ servers aren’t necessarily having to do the work. In these exciting times of automation, not having to worry about servers seems pretty appealing.\n\n[Serverless architecture has an annual growth rate of over 700%](https://hackernoon.com/severe-truth-about-serverless-security-and-ways-to-mitigate-major-risks-cd3i3x6f) and shows no signs of slowing down. Its popularity is all due to the operational efficiency it promises. Instead of worrying about infrastructure, you can essentially outsource those responsibilities to your cloud provider. Once you specify the resources your code requires, the cloud provider provisions the servers and deploys. Even better, you only pay for what is used.\n\nThe dream of serverless computing is pretty simple: Developers deploy into infrastructures they don’t have to manage, set up, or maintain. Once they upload a simple cloud function it _just works_. Since organizations are only paying for what they use, this system is infinitely scalable, and because this is all managed by a cloud provider, they take over security as well.\n\nWith a serverless architecture carrying all of the ops load, what does that mean for sysadmins?\n\n## Serverless: The end of ops?\n\nServerless hype hasn’t been without skepticism. On the ops side of things, there has been some concern that serverless is trying to force ops out of the picture. A successful [DevOps team structure](/topics/devops/build-a-devops-team/) is all about dev and ops working together but, as we well know, there are some challenges to overcome. For one: dev and ops teams are incentivized by vastly different things. Development wants faster feature delivery, whereas operations wants stability and availability. These two goals contradict each other. With serverless bypassing ops altogether, it unintentionally reinforces the “ops as a barrier” trope.\n\nGetting to the point: No, serverless is not the end of ops as we know it. 
Ops looks after monitoring, security, networking, support, and the overall stability of a system. Serverless is just one way of managing systems, but it isn’t the only way. [Sysadmin work is still happening – you’re just outsourcing it with serverless](https://martinfowler.com/articles/serverless.html), and that’s not necessarily a bad (or good) thing.\n\nEven with so many new technologies and methodologies out there – Kubernetes, serverless, containerization – the basics of computing remain the same. It’s only when we understand the fundamentals and commit to building reliable code that we can make the most of these new platforms.\n\n[In a recent interview with Google Staff Developer Advocate Kelsey Hightower](/blog/kubernetes-chat-with-kelsey-hightower/), one of the biggest challenges he mentions is the “all-or-nothing” approach. “Either I’m all serverless, or I’m all Kubernetes, or I’m all traditional infrastructure. That has never made sense in the history of computing.” Ultimately, you don’t have to choose: Pick the platforms that work best for the job. Monoliths are easy to build and run, and microservices and Kubernetes can help organizations scale faster. Serverless is just another tool that teams can use to keep innovating.\n\n\u003C!-- blank line -->\n\u003Cfigure class=\"video_container\">\n  \u003Ciframe src=\"https://www.youtube.com/embed/9OHNejqXOoo\" frameborder=\"0\" allowfullscreen=\"true\"> \u003C/iframe>\n\u003C/figure>\n\u003C!-- blank line -->\n\nVideo directed and produced by [Aricka Flowers](/company/team/#arickaflowers)\n{: .note}\n\n## Serverless pros and cons\n\nAs with any architecture, there are going to be some benefits and some disadvantages. It’s important to weigh the pros and cons carefully against your organization’s needs.\n\n### Less operational overhead\n\nThis is frequently listed as one of the biggest advantages of serverless. 
Security patches, server upgrades, and other maintenance are already taken care of, which can free up resources for more important things.\n\n### Scalability\n\nYou just upload your code/function and your cloud provider handles the rest. [Serverless allows as many functions to be run (in parallel, if necessary) as needed to continually service all incoming requests](https://hackernoon.com/what-is-serverless-architecture-what-are-its-pros-and-cons-cc4b804022e9). Or you can have serverless run an entire application (with frontend, backend, etc.) and still reap the benefits. Because you’re not boxed into a certain pricing structure or number of minutes, serverless can be infinitely scalable (in theory).\n\n### Lower operating costs\n\nYou’re only using what you need and all costs are purely based on usage. Finances are dynamic, which is more representative of how companies actually operate.\n\nOne example of this concept is comparing a rideshare service to the costs of owning a vehicle. With a car, there are costs you pay regardless of usage (insurance, registration, car payment), there are costs you pay depending on the usage (gas, maintenance), and then there are additional costs tied to unforeseen circumstances (accidents, that pothole again). With a rideshare, you’re just paying to go from point A to point B – all car costs we listed previously are being taken care of by someone else.\n\n### Less control\n\nOften cited as the biggest con, what you gain in reduced operational costs, complexity, and engineering lead time comes with [increased vendor dependencies](https://martinfowler.com/articles/serverless.html) and less oversight. There has to be a lot of trust in the cloud vendor since you’ll be unable to manage the server yourself. Not having control of your system means that if errors happen, you’re reliant on someone else to fix them. 
In business, no one cares more about your problems than you do.\n\n### Potential security risks\n\nWhile cloud vendors will manage security for you, and are generally well equipped for that task, it’s the architecture of serverless itself that could introduce vulnerabilities into the system. This is especially true for serverless applications built on top of microservices, with independent pieces of software interacting through numerous APIs. Gartner warns that [APIs will become the major source of data breaches by 2022](https://www.gartner.com/doc/3834704/build-effective-api-security-strategy).\n\n### Unpredictable costs\n\nHow can we list costs as both a pro and a con? That’s mainly due to the elasticity serverless offers. Since everything is event-triggered, rather than paid up front, elasticity becomes a double-edged sword: You’re not paying for cloud usage you don’t need, but because it’s so easy to use, you may end up using more.\n\nFor another real-world example of this concept in action, let’s examine ketchup, namely the introduction of the plastic squeeze bottle.\n\nHeinz ketchup had been served in the iconic glass bottles we all know and love since 1890, but in 1983 the Heinz corporation unveiled the squeezable plastic bottle to consumers. This was heralded as a huge innovation – consumers could squeeze more precisely, the bottles were unbreakable, which reduced losses in shipment, and the ergonomic design made it perfect for hands of all sizes. After the introduction of the new squeezable bottle, [ketchup sales went up by 3.7% from the prior year](https://www.npr.org/sections/thesalt/2014/04/29/306911004/whats-the-secret-to-pouring-ketchup-know-your-physics). Why? Now that ketchup could be dispensed more easily, people used a lot more of it. 
Instead of tapping on a glass bottle hoping for a drop, the ketchup cup runneth over.\n\nWith serverless being so easy to use, it’s best to assume that developers will use it more than you expect.\n\n## Where are we on our serverless journey?\n\nSo much of the literature about serverless comes from the cloud providers themselves, so of course it focuses on the most idealized vision of what serverless can be. As a result, those in the ops community felt like they were being forced out, and organizations were too busy paying attention to the benefits to see the potential downsides.\n\nServerless opens up a lot of opportunities in DevOps, and offers a unique solution for many use cases. Does this mean that sysadmins everywhere will soon be out of a job? Probably not. Serverless is just another tool in the toolbox, and at GitLab we’re exploring how to help users leverage Knative and Kubernetes to define and manage serverless functions in GitLab. We’re also looking into how we can be even more multi-faceted. Some users want to work with a Kubernetes cluster, some want to push a serverless function into AWS Lambda. We can already help with monoliths and microservices, and we’re actively working on supporting serverless as well.\n\nInterested in joining the conversation for this category? Please join us in our [public epic](https://gitlab.com/groups/gitlab-org/-/epics/155) where we discuss this topic and we can answer any questions you might have. 
Everyone can contribute.\n\nPhoto by [Tomasz Frankowski](https://unsplash.com/@sunlifter?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText) on [Unsplash](https://unsplash.com/?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText)\n{: .note}\n",[9,984,923],{"slug":1941,"featured":6,"template":688},"is-serverless-the-end-of-ops","content:en-us:blog:is-serverless-the-end-of-ops.yml","Is Serverless The End Of Ops","en-us/blog/is-serverless-the-end-of-ops.yml","en-us/blog/is-serverless-the-end-of-ops",{"_path":1947,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":1948,"content":1953,"config":1959,"_id":1961,"_type":13,"title":1962,"_source":15,"_file":1963,"_stem":1964,"_extension":18},"/en-us/blog/kubecon-na-2019-are-you-about-to-break-prod",{"title":1949,"description":1950,"ogTitle":1949,"ogDescription":1950,"noIndex":6,"ogImage":1202,"ogUrl":1951,"ogSiteName":675,"ogType":676,"canonicalUrls":1951,"schema":1952},"KubeCon NA: Are you about to break Prod?","Use Pulumi and GitLab to build a pipeline that validates your application, infrastructure, and deployment process.","https://about.gitlab.com/blog/kubecon-na-2019-are-you-about-to-break-prod","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"KubeCon NA: Are you about to break Prod?\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Erin Krengel, Pulumi\"}],\n        \"datePublished\": \"2020-01-27\",\n      }",{"title":1949,"description":1950,"authors":1954,"heroImage":1202,"date":1956,"body":1957,"category":876,"tags":1958},[1955],"Erin Krengel, Pulumi","2020-01-27","\n\nA couple of months ago, my [Pulumi](https://www.pulumi.com/) colleague Sean Holung, staff software engineer, and I had the opportunity to present [\"Are you about to break prod? Acceptance Testing with Ephemeral Environments\"](https://www.youtube.com/watch?v=jAQhDZiRzBQ) at KubeCon NA 2019. 
In this talk, we covered what an ephemeral environment is, how to create one, and then we walked the audience through a concrete example. Given our limited time, we had to move quickly through a ton of information. This post will recap our presentation and add a few more details we weren't able to cover.\n\nAs software engineers, our job is to deliver business value. To do this, we need to be delivering software both quickly and reliably.\n\nSo the question we ask you is: are you about to break prod? Everyone will break production at some point because there are things we miss. As independent software lead Alexandra Johnson sums up so well in a tweet: \"Failures are part of the cost of building and shipping large systems.\" Building a robust pipeline allows us to move quickly in the case of failure and gain confidence around making changes to our infrastructure and applications.\n\n{::options parse_block_html=\"false\" /}\n\n\u003Cdiv class=\"center\">\n\n\u003Cblockquote class=\"twitter-tweet\">\u003Cp lang=\"en\" dir=\"ltr\">Big takeaway from \u003Ca href=\"https://twitter.com/hashtag/KubeCon?src=hash&amp;ref_src=twsrc%5Etfw\">#KubeCon\u003C/a>: none of us want to break prod, but failures are part of the cost of building and shipping large systems. 
Using tools like \u003Ca href=\"https://twitter.com/hashtag/AcceptanceTesting?src=hash&amp;ref_src=twsrc%5Etfw\">#AcceptanceTesting\u003C/a> (\u003Ca href=\"https://twitter.com/eckrengel?ref_src=twsrc%5Etfw\">@eckrengel\u003C/a>) and \u003Ca href=\"https://twitter.com/hashtag/ChaosEngineering?src=hash&amp;ref_src=twsrc%5Etfw\">#ChaosEngineering\u003C/a> (\u003Ca href=\"https://twitter.com/Ana_M_Medina?ref_src=twsrc%5Etfw\">@Ana_M_Medina\u003C/a>) can increase your confidence in your infrastructure changes!\u003C/p>&mdash; Alexandra Johnson (@alexandraj777) \u003Ca href=\"https://twitter.com/alexandraj777/status/1198373475049623552?ref_src=twsrc%5Etfw\">November 23, 2019\u003C/a>\u003C/blockquote> \u003Cscript async src=\"https://platform.twitter.com/widgets.js\" charset=\"utf-8\">\u003C/script>\n\n\u003C/div>\n\nWith this in mind, we use Pulumi and GitLab to build a pipeline that validates our application, infrastructure, and deployment process. \n\n## Ephemeral environments\n\nWhat is an ephemeral environment? It is a short-lived environment that mimics a production environment. To maintain agility, boundaries are defined in the environment to only encompass the first-level dependencies of the particular microservice that is being deployed. This means you don't have to spin up every single microservice or piece of infrastructure that's running in production. Yet you may need to spin up extra pieces of infrastructure to properly test the microservice. For example, you may need to create a subscription to pull from a PubSub topic your microservice writes to. This subscription would allow your acceptance tests to pull from a topic in order to validate an outbound message is published.\n\n## Why this is important\n\nInfrastructure is a key part of an application's behavior. The architecture and requirements are continually evolving. 
How can you incorporate these into a testing suite to give you a high degree of confidence?\n\nEphemeral environments allow you to integrate infrastructure and deployment processes into a testing suite. They ensure your testing environment is always in-sync with production and therefore allow you to iterate quickly to meet new requirements.\n\nEphemeral environments also encourage you to lean on automated tests over manual tests. If you use ephemeral environments as a replacement for a testing environment, there is not enough time to go in and run a manual check. Shifting your mindset to automated tests can be challenging, yet it's imperative that you do so. Automated tests guarantee your application behaves as expected today as well as months from now when you're out on vacation.\n\n## Our demo application\n\nTo demonstrate the effectiveness of integrating acceptance testing with ephemeral environments into your deployment process, we created a simple demo application. The service is written in Go and accepts a message on the `/message` endpoint, then places it in a storage bucket and sends a notification about the new object on a PubSub topic. The code for this application lives in our [main.go](https://gitlab.com/rocore/demo-app/blob/master/main.go) file. While you can walk through this code yourself, the most important thing to call out is that our application is *configurable*. This means we take configuration in at the very beginning of our main function and shut down the application if the values are not present.\n\n```go\nfunc main() {\n    ...\n\t// Get configuration from environment variables. 
These are\n\t// required configuration values, so we use a helper\n\t// function to get the values and exit if the value is not set.\n\tproject := getConfigurationValue(\"PROJECT\")\n\ttopicName := getConfigurationValue(\"TOPIC\")\n\tbucketName := getConfigurationValue(\"BUCKET\")\n    ...\n}\n\nfunc getConfigurationValue(envVar string) string {\n\tvalue := os.Getenv(envVar)\n\tif value == \"\" {\n\t\tlog.Fatalf(\"%s not set\", envVar)\n\t}\n\tlog.Printf(\"%s: %s\", envVar, value)\n\treturn value\n}\n```\n\n### Infrastructure\n\nThere are many pieces of infrastructure to spin up, and we can use Pulumi to easily wire it all together. Our architecture looks like this:\n\n![Pulumi Architecture](https://about.gitlab.com/images/blogimages/pulumidemoarch.jpg){: .medium.center}\n\nYou can check out the Pulumi code that we use to reproduce both our ephemeral environments and production in the [infrastructure/index.ts](https://gitlab.com/rocore/demo-app/blob/master/infrastructure/index.ts) file. The neat thing about using Pulumi is that we can create the Google Cloud Platform (GCP) resources we need and then directly reference them in our Kubernetes deployment. 
Using Pulumi ensures we're always configuring our application with the correct GCP resources for that environment.\n\nFor example, in our Kubernetes deployment, we set the environment variables by using the topic and bucket variables created just above.\n\n```typescript\n// Create a K8s Deployment for our application.\nconst appLabels = { appClass: name };\nconst deployment = new k8s.apps.v1.Deployment(name, {\n    metadata: { labels: appLabels },\n    spec: {\n        selector: { matchLabels: appLabels },\n        template: {\n            metadata: { labels: appLabels },\n            spec: {\n                containers: [{\n                    ...\n                    env: [\n                        { name: \"TOPIC\", value: topic.name }, // referencing topic just created\n                        { name: \"BUCKET\", value: bucket.name }, // referencing bucket just created\n                        { name: \"PROJECT\", value: project },\n                        {\n                            name: \"GOOGLE_APPLICATION_CREDENTIALS\",\n                            value: \"/var/secrets/google/key.json\"\n                        },\n                    ],\n                    ...\n                }]\n            }\n        }\n    },\n});\n```\n\n### Acceptance tests\n\nThe acceptance tests validate that our service, when stood up, functions as expected. They are run against an ephemeral environment. The tests live in the `acceptance/acceptance_test.go` [file](https://gitlab.com/rocore/demo-app/blob/master/acceptance/acceptance_test.go). You'll notice we're once again using the helper function `getConfigurationValue`. Our acceptance tests must also be configured to ensure they're validating against the correct resources for that particular ephemeral environment.\n\nSince the service is only accessible from within the Kubernetes cluster, we use a Kubernetes job to run our acceptance tests. 
Using a Kubernetes job is a good technique when your CI is running externally, such as from GitLab, and you do not want to expose your service publicly. Our ephemeral environment plus acceptance test looks like this:\n\n![Acceptance Tests](https://about.gitlab.com/images/blogimages/pulumiacceptancetestarch.jpg){: .medium.center}\n \nWe spin up a Kubernetes Job and additional resources by using an if statement at the bottom of our `infrastructure/index.ts` file. The conditional depends on the environment's name as follows:\n\n```typescript\n// If it's a test environment, set up acceptance tests.\nlet job: k8s.batch.v1.Job | undefined;\nif (ENV.startsWith(\"test\")) {\n    job = acceptance.setupAcceptanceTests({\n        ...\n    });\n}\n\n// Export the acceptance job name, so we can get the logs from our\n// acceptance tests.\nexport const acceptanceJobName = job ? job.metadata.name : \"unapplicable\";\n```\n\nThat covers all the major aspects of our application and infrastructure, and if you'd like to view the code in detail, it is available in our `demo-app` [GitLab repository](https://gitlab.com/rocore/demo-app).\n\n## Our pipeline\n\nWhen developing a new service, we must establish a solid deployment strategy upfront. We want to make sure we're building in quality from day one. As we develop the service, we can add acceptance tests for every feature we add while the context and requirements are still fresh in our minds. This ensures we have thorough coverage of our app's functionality.\n\nWe used GitLab to set up our pipeline. We chose GitLab because it's straightforward to set up and allows us to run our pipeline on our Docker image of choice. We use a [base-image](https://gitlab.com/rocore/global-infra/blob/master/base-image/Dockerfile) that has all our dependencies installed and then reference that Docker image and tag in our `demo-app` pipeline. 
The Docker image allows us to bundle and version the dependencies for building our application and infrastructure.\n\n![GitLab Pipelines](https://about.gitlab.com/images/blogimages/pulumibloggitlabci.png){: .shadow.medium.center}\n \n1. **Test and Build** - This runs our unit tests and builds both our application and acceptance test images. To build our images, we used [Kaniko](https://github.com/GoogleContainerTools/kaniko), a tool for building images within a container or Kubernetes cluster. GitLab has excellent documentation on [how to incorporate Kaniko](https://docs.gitlab.com/ee/ci/docker/using_kaniko.html) into your pipeline. The application image is an immutable image that is used for both running our acceptance tests and deploying to production.\n1. **Acceptance Test** - This is what spins up our ephemeral environments and runs our acceptance tests. This acts as a quality gate, catching issues before production.\n\n    Our ephemeral environment and Kubernetes job are both spun up in the `script` portion of the acceptance test job definition. We do a bit of setup for our new acceptance test stack and then run `pulumi up`. Here is the printout from our acceptance tests.\n\n    ```bash\n    ...\n    $ pulumi stack init rocore/$ENV-app\n    Logging in using access token from PULUMI_ACCESS_TOKEN\n    Created stack 'rocore/test-96425413-app'\n    $ pulumi config set DOCKER_TAG $DOCKER_TAG\n    $ pulumi config set ENV $ENV\n    $ pulumi config set gcp:project rocore-k8s\n    $ pulumi config set gcp:zone us-west1-a\n    $ pulumi up --skip-preview\n    Updating (rocore/test-96425413-app):\n    ...\n    Resources:\n        + 16 created\n\n    Duration: 4m10s\n\n    Permalink: https://app.pulumi.com/rocore/demo-app/test-96425413-app/updates/1\n    ```\n\n    The `after_script` destroys our stack as well as prints the logs of both our Kubernetes job and deployment, which help with debugging if our tests were to fail. 
We use the `after_script` to make sure that we always clean up and print logs even when our acceptance tests fail.\n    \n    ```bash\n    ...\n    $ pulumi stack select rocore/$ENV-app\n    $ kubectl logs -n rocore --selector=appClass=$ENV-demo-app-acc-test --tail=200\n    === RUN   TestSimpleHappyPath\n    === RUN   TestSimpleHappyPath/message_is_sent_to_PubSub_topic\n    === RUN   TestSimpleHappyPath/message_is_stored_in_bucket\n    ",[902,9,835,108,232,278],{"slug":1960,"featured":6,"template":688},"kubecon-na-2019-are-you-about-to-break-prod","content:en-us:blog:kubecon-na-2019-are-you-about-to-break-prod.yml","Kubecon Na 2019 Are You About To Break Prod","en-us/blog/kubecon-na-2019-are-you-about-to-break-prod.yml","en-us/blog/kubecon-na-2019-are-you-about-to-break-prod",{"_path":1966,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":1967,"content":1973,"config":1978,"_id":1980,"_type":13,"title":1981,"_source":15,"_file":1982,"_stem":1983,"_extension":18},"/en-us/blog/kubernetes-101",{"title":1968,"description":1969,"ogTitle":1968,"ogDescription":1969,"noIndex":6,"ogImage":1970,"ogUrl":1971,"ogSiteName":675,"ogType":676,"canonicalUrls":1971,"schema":1972},"Getting Started with Kubernetes","Pods, nodes, clusters – oh my! 
Get the lowdown on Kubernetes from Brendan O'Leary's talk at Contribute 2019.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749678474/Blog/Hero%20Images/clouds_kubernetes101.jpg","https://about.gitlab.com/blog/kubernetes-101","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"Getting Started with Kubernetes\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Sara Kassabian\"}],\n        \"datePublished\": \"2019-10-24\",\n      }",{"title":1968,"description":1969,"authors":1974,"heroImage":1970,"date":1975,"body":1976,"category":876,"tags":1977},[703],"2019-10-24","\nKube-uh-not-a-clue?\n\nIt's the most common response to anyone who hears the term “Kubernetes” for the first time. If Kubernetes is, quite literally, Greek to you, then this blog post and [the corresponding video](https://www.youtube.com/watch?v=rq4GZ_GybN8) are two good places to start.\n\nWhile at [Contribute 2019](/blog/how-we-scaled-our-summits/), senior solutions manager [Brendan O’Leary](/company/team/#brendan) gave a presentation explaining the nuts and bolts of Kubernetes and how we use this open source tool at GitLab.\n\n## What is Kubernetes?\n\n“[Kubernetes](https://kubernetes.io/) is an open source system for automating deployment, scaling, and management of containerized applications,” according to the [Cloud Native Computing Foundation (CNCF)](https://www.cncf.io/).\n\n>“You'll hear [Kubernetes] called a lot of different things. You'll hear people say, ‘Well, it's a container scheduler.’ You'll hear people say it's a desired state manager. You'll hear people say it's an orchestrator,” says Brendan. “All these things just basically boil down to it's _a system that keeps the system how we want it to be_, which sounds kind of crazy. 
But when we're a software company for software companies, you can relate a little bit.”\n\nSo a system that keeps the system how we want it to be, what does that mean exactly?\n\nTo understand what Kubernetes is and what it does, it’s best to dig a bit deeper into the origin story of this technology.\n\nThe journey to Kubernetes started at Google, when infrastructure developers were searching for a way to deploy new applications on hundreds of thousands of globally distributed servers. The result was Borg, a private tool developed by Google engineers for this purpose. The engineers iterated on Borg to launch Project Seven – an open source project that wasn’t entirely Borg, but took elements of Borg to produce version 1.0 of Kubernetes.\n\nKubernetes, which translates from Greek to “pilot,” “helmsman,” or “governor,” is managed by CNCF, a foundation created by Google and the Linux Foundation to house Kubernetes and other open source computing projects.\n\n## The benefits of Kubernetes\n\nKubernetes is highly portable across multiple cloud platforms and simplifies container management across however many of them are in use. Kubernetes makes it easy to achieve greater scalability, flexibility, and productivity. \n\nAnother big benefit of Kubernetes is of course the fact that it’s open source – it’s continuously improved and updated so that there are minimal workflow interruptions.\n\n## What features does Kubernetes offer?\n\nKubernetes is one of the fastest growing open-source software projects around today. Here are some of the reasons why:\n\n* Deployments can be sent to one cloud or multiple cloud services without losing any application functionality or performance.\n\n* Kubernetes automation capabilities handle scheduling and deploying containers regardless of where they come from (on-premise, cloud, or other). 
The automation also auto-scales up and down to increase efficiency and reduce waste, and it creates new containers when dealing with a heavy workload.\n\n* Kubernetes allows for rolling back an application change if something goes wrong. \n\n* The open-source nature of Kubernetes lets users take advantage of a vast ecosystem of open-source tools.\n\n* The software never becomes outdated – it is continuously updated.\n\n## The role of containers\n\nContainers are a lightweight technology that lets you securely run an application and its dependencies without impacting other containers or your operating system (OS). This makes containers more nimble and scalable than using other tools for application management, like virtual machines (VMs) or bare metal. Like VMs, containers can replicate the application as it’s in development, but unlike VMs, the container does not duplicate the OS each time and instead shares the infrastructure, container technology (e.g., Docker), and OS with the host computer. Containers are lightweight and easier to run on the cloud because the OS is not duplicated along with the application, but container technology can be challenging to manage without a tool.\n\n“As you get more and more containers... it has a huge advantage technically, but it really creates a mess as to how are we managing all these containers,” says Brendan. 
“And there’s another problem – bare metal, virtual machines, containers, these all assume to some extent that you know what's going on with the computer that's running them.”\n\n![Evolution of containers](https://about.gitlab.com/images/blogimages/evolution_of_containers.png){: .shadow.medium.center}\nContainers make application deployment simpler, but containers are hard to orchestrate without a tool like Kubernetes.\n{: .note.text-center}\n\nBut orchestrating the various application deployments in containers is a level of abstraction that is difficult for the human mind to grapple with and is challenging to manage manually, which is where Kubernetes comes in.\n\n## Kubernetes as scheduler\n\nKubernetes is an open source container orchestrator that automates container management from deployment to scaling and operating.\n\nThere are a few key advantages to using Kubernetes, namely that the technology takes an extremely abstract method of application management – containers – and schedules the deployments to occur automatically.\n\nBrendan mentions other advantages to using Kubernetes, including that it can run routine health checks and is a self-healing technology. A second key advantage to using Kubernetes for [DevOps](/topics/devops/) is that it is a declarative technology at its core. By using the [desired state manager](https://medium.com/@yannalbou/kubernetes-desired-state-4c5c4e873743), you can describe how you want your application to run and Kubernetes makes it happen.\n\n## Core Kubernetes concepts and definitions\n\n*   **Pod**: An abstraction that represents a group of one or more application containers. “The pod is just a unit that says these are the containers that represent the front end website, or these are the containers that represent the payment system,” explains Brendan.\n*   **Node**: A worker machine in Kubernetes that may be a VM or a physical machine (e.g., a computer), depending upon the cluster. 
The node often includes Docker, the pods (“group of containers”), and the VM or computer that includes the OS.\n*   **Cluster**: The highest level of abstraction in Kubernetes, it contains all the nodes, pods, and a **master** – which maintains the desired state of your application by orchestrating the nodes.\n*   **Service**: Defines a logical set of pods (e.g., “payment system”) and sets a policy about who can access them. “Pods come and go, but a service is forever,” or so the saying goes, Brendan says. “A pod is going to get scheduled into a node. But if that node went away, the fact that this pod is a member of this service means I've got to go find somewhere else to make a new pod that has this container running in it.” A service allows Kubernetes to route traffic to your application regardless of where the pod is running.\n\nThere are plenty of other buzzwords and phrases that are associated with Kubernetes, and Brendan dives into some of them in his presentation (captured in the video below). More concepts are explained on the [Kubernetes website at CNCF](https://kubernetes.io/docs/concepts/).\n\n## GitLab and Kubernetes\n\nThere are three key touchpoints between GitLab and Kubernetes:\n\n1. **GitLab is an application, so it can be run on Kubernetes.**\nIf a GitLab customer is already using a cloud native environment (i.e., containers and Kubernetes), then GitLab the application can be installed in that cloud native environment. We have already set up [Helm Charts](https://docs.gitlab.com/charts/), which describe [how to install GitLab in a cloud native environment](https://docs.gitlab.com/charts/#installing-gitlab-using-the-helm-chart).\n2. **Customers that build their applications in GitLab using CI/CD can deploy to Kubernetes.**\nThe [Configure team at GitLab](/handbook/engineering/development/ops/configure/) works on the integration between GitLab and Kubernetes so developers can deploy their applications automatically to a Kubernetes cluster. 
[The GitLab and Kubernetes integration](https://docs.gitlab.com/ee/user/project/clusters/) allows customers to create and dismantle Kubernetes clusters, use review apps, run pipelines, deploy apps, view pod logs, detect and monitor Kubernetes, and much more. The Ops product teams at GitLab are always working to enhance the integration between Kubernetes and GitLab to make Auto DevOps faster and more efficient.\n3. **Moving our production system for GitLab.com to a Kubernetes cluster.**\nWe recently moved our giant GitLab.com application from Microsoft Azure [to Google Cloud Platform](/blog/moving-to-gcp/). A key reason we changed platforms is that we wanted to move our GitLab.com project to a Kubernetes cluster. This project is ongoing, but we are making major strides toward continuous deployment using Kubernetes.\n\n## Is Kubernetes easy to use?\n\nEveryone’s favorite answer: it depends. As with any software, there’s a learning curve to using Kubernetes (like having a basic understanding of how containers work). And it also may not be the right software fit for your needs. But if it is, adopting it doesn’t have to be complicated. \n\nKubernetes gives users the basic building blocks for creating developer projects while still allowing user flexibility where it’s needed. It can get a little more labor-intensive if users choose to build their own Kubernetes clusters rather than letting the service do it for them. But most companies don’t choose this route. \n\n## But wait, there’s more\n\nIf you have more questions, like why the heck Kubernetes is abbreviated to K8s, or are searching for more resources, you’re in luck. Brendan dives into more detail about some of the etymology, key concepts, vocabulary and even pop culture that shapes Kubernetes in his presentation. 
Watch the video below to learn more about how Kubernetes has been the impetus behind a major shift toward cloud native in the DevOps industry, and why we’re on the front lines of that change here at GitLab.\n\n### Watch\n\n\u003Cfigure class=\"video_container\">\n  \u003Ciframe src=\"https://www.youtube.com/embed/rq4GZ_GybN8\" frameborder=\"0\" allowfullscreen=\"true\"> \u003C/iframe>\n\u003C/figure>\n\n### Supplemental reading\n\n[Kubernetes and the open source community](/blog/kubernetes-chat-with-joe-beda/): A conversation between GitLab CEO [Sid Sijbrandij](/company/team/#sytses) and the co-creator of Kubernetes, Joe Beda.\n\n[Kubernetes and the future of cloud native](/blog/kubernetes-chat-with-kelsey-hightower/): Sid chats with Kelsey Hightower, Google staff developer advocate about cloud native.\n\n[Kubernetes, containers, cloud native – the basics](/blog/containers-kubernetes-basics/): Get a quick overview of the key Kubernetes concepts.\n\n[Kubernetes + GitLab](/solutions/kubernetes/): Explore how GitLab and Kubernetes interact at various touchpoints.\n\n[Cover Photo](https://unsplash.com/photos/9BJRGlqoIUk) by [Pero Kalimero](https://unsplash.com/@pericakalimerica?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText) on [Unsplash](https://unsplash.com/search/photos/cloud?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText)\n{: .note}\n",[727,9],{"slug":1979,"featured":6,"template":688},"kubernetes-101","content:en-us:blog:kubernetes-101.yml","Kubernetes 101","en-us/blog/kubernetes-101.yml","en-us/blog/kubernetes-101",{"_path":1985,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":1986,"content":1992,"config":1997,"_id":1999,"_type":13,"title":2000,"_source":15,"_file":2001,"_stem":2002,"_extension":18},"/en-us/blog/kubernetes-and-multicloud",{"title":1987,"description":1988,"ogTitle":1987,"ogDescription":1988,"noIndex":6,"ogImage":1989,"ogUrl":1990,"ogSiteName":675,"ogType":676,"canonicalUrls":1990,"schema":1991},"How 
Kubernetes merges with multicloud & how to manage it","Google Cloud's Ian Chakeres and Tim Hockin discuss how Kubernetes reduces cloud noise and makes multicloud possible.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749681075/Blog/Hero%20Images/kubernetes-multicloud-blog.jpg","https://about.gitlab.com/blog/kubernetes-and-multicloud","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"How Kubernetes merges with multicloud & how to manage it\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Chrissie Buchanan\"}],\n        \"datePublished\": \"2020-02-05\",\n      }",{"title":1987,"description":1988,"authors":1993,"heroImage":1989,"date":1994,"body":1995,"category":790,"tags":1996},[787],"2020-02-05","\n\nIn November 2019, we had the opportunity to co-host [MulticloudCon](https://multicloudcon.io/), a zero-day event with our partners at [Upbound](https://upbound.io/). The event featured experts in cloud, Kubernetes, database resources, CI/CD, security, and more to learn how multicloud is evolving and empowering developers and operations experts across the industry.\n\nIn this presentation from MulticloudCon, Google Cloud's [Ian Chakeres](http://www.ianchak.com/) and [Tim Hockin](https://twitter.com/thockin) cover the challenges of using multiple clouds, and how Kubernetes cuts through the cloud noise to provide consistency in workflows. Gartner predicts that by 2021, [over 75% of midsize and large organizations will have adopted a multicloud or hybrid IT strategy.](https://www.gartner.com/en/documents/3895580/predicts-2019-increasing-reliance-on-cloud-computing-tra)\n\nAs organizations continue to amp up their [multicloud](/topics/multicloud/) initiatives, they’ll need ways to manage the complexities and differences between multiple cloud environments. 
Kubernetes is perfectly built for this task because it creates the right abstractions so teams can utilize multiple clouds on a consistent platform.\n\n\n## Discussion highlights\n\n### The challenges of multiple clouds:\n\n> \"The hard thing about multiple clouds is the noise. There's so much that is different across clouds. To learn them to the _depth_ that you need to be able to develop and debug real applications on these clouds is really, really difficult. Networking capabilities across clouds, across environments, are incredibly different and varied. Storage, auto-scaling, life cycle management, all of these things that have a real, material impact on the way you develop your applications. It can be total chaos for your staff.\" – Tim Hockin, Software Engineer, Kubernetes, Anthos, and GKE\n\n\n### Why Kubernetes is built for multicloud:\n\n> \"Kubernetes is this platform that is [at a] high enough level that it hides most of those variances that we see across all the different clouds. But it's also [at a] low enough level that you can do anything that you need to, for your business and your developers. Kubernetes provides these abstractions that insulate your teams from some of the mess below, hiding that infrastructure complexity that's associated with multiple clouds.\" – Ian Chakeres, Engineering Manager, Anthos and GKE\n\n### How open source continues to improve Kubernetes and multicloud:\n\n> \"Not only can you build the platform for your teams, but there is this entire ecosystem of people who are out there, in Kubernetes, building things that can help you run your business. I went to look at the [CNCF](https://www.cncf.io/) page recently, just to look at all the different projects, and even just the graduated project list now fills your entire screen. There's this entire ecosystem that builds the infrastructure and the applications... they can fill in the gaps if there are any things that your business is running into. 
So Kubernetes is giving you this leverage as being a platform that actually spans all of those other clouds.\" – Ian Chakeres\n\n## Kubernetes and multicloud\n\nNetworking across environments, clouds, and clusters remains challenging. Organizations don’t want to train DevOps teams on multiple clouds, and even if they did, training teams on the intricacies and fine details for _every single cloud provider_ would be an exercise in futility. Tailoring deployments for each cloud is inefficient and time-consuming. Kubernetes provides the consistency teams need to work with multiple clouds by creating abstractions that bring all deployments into one environment. Even though there are many exciting things happening in open source around Kubernetes and multicloud, not every abstraction is leak-proof.\n\nIn a perfect multicloud, multi-cluster hybrid world, teams are working with multiple providers in a seamless environment that hides the underlying infrastructure. It’s still a little too early for multicloud and hybrid Kubernetes to make that \"perfect\" world a reality, but as multicloud technology continues to evolve, Kubernetes will continue to be at its core.\n\nTo learn more about how the team at Google is investing in Kubernetes and multicloud, watch the full presentation below.\n\n\u003C!-- blank line -->\n\u003Cfigure class=\"video_container\">\n  \u003Ciframe src=\"https://www.youtube-nocookie.com/embed/ArQL05VZ18U\" frameborder=\"0\" allowfullscreen=\"true\"> \u003C/iframe>\n\u003C/figure>\n\u003C!-- blank line -->\n\nCover image by Francisco Delgado on [Unsplash](https://unsplash.com/s/photos/multi-cloud?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText)\n{: .note}\n",[727,9],{"slug":1998,"featured":6,"template":688},"kubernetes-and-multicloud","content:en-us:blog:kubernetes-and-multicloud.yml","Kubernetes And 
Multicloud","en-us/blog/kubernetes-and-multicloud.yml","en-us/blog/kubernetes-and-multicloud",{"_path":2004,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":2005,"content":2011,"config":2016,"_id":2018,"_type":13,"title":2019,"_source":15,"_file":2020,"_stem":2021,"_extension":18},"/en-us/blog/kubernetes-chat-with-joe-beda",{"title":2006,"description":2007,"ogTitle":2006,"ogDescription":2007,"noIndex":6,"ogImage":2008,"ogUrl":2009,"ogSiteName":675,"ogType":676,"canonicalUrls":2009,"schema":2010},"Kubernetes and the open source community: We chat with Joe Beda","Our CEO sits down with Kubernetes co-creator Joe Beda to talk about the future of open source.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749680604/Blog/Hero%20Images/tech-explorers-cover.png","https://about.gitlab.com/blog/kubernetes-chat-with-joe-beda","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"Kubernetes and the open source community: We chat with Joe Beda\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Chrissie Buchanan\"}],\n        \"datePublished\": \"2019-05-20\",\n      }",{"title":2006,"description":2007,"authors":2012,"heroImage":2008,"date":2013,"body":2014,"category":876,"tags":2015},[787],"2019-05-20","\n\nJoe Beda is the Principal Engineer at VMWare and co-creator of Kubernetes. Beda and Craig McLuckie’s Google project to build a container orchestration tool has exploded and Kubernetes is now a large, open source community with thousands actively contributing to the project thanks to the [Cloud Native Computing Foundation](https://cncf.io/). 
In the world of open source they don’t get much better than Joe Beda, which is why we were thrilled to speak with him as part of our TechExplorers series where we sit down with the industry’s tech leaders.\n\nJoe and GitLab CEO [Sid Sijbrandij](/company/team/#sytses) went over a variety of topics like cloud native, Kubernetes, the business of open source, and many others. What was most interesting, but not surprising, was the integral role the open source community had in the success of these projects.\n\n“I think open source is evolving… It’s never been something that’s sat still. One of the lessons from Kubernetes more than anything else is that open source today is about community, if not more than code,” Beda says. He admits that right now is a tumultuous time for open source, with the line between product and project getting blurred. The “business” of open source can sometimes alienate the community that supported these initiatives in the first place, something many leaders will have to navigate in the years ahead.\n\n“It’s like there’s the code and the license for the code, and then there’s the community that builds around it. And even if it’s not a legal contract, I think there’s a social contract between the leaders of an open source project and the people who are members of that community. And I think you have to be very respectful of that social contract.”\n\nOne of the most important things an open source project can do to maintain the trust of the community, according to Beda, is to be very explicit about its motivations from the beginning. At GitLab, we’ve taken this message to heart and have [our promises to the open source community](/company/stewardship/) public on our website.\n\nKubernetes has already made a major impact on the way we deploy applications, and users continue to contribute and add to the project. “I think I’m still blown away with just the diversity of the projects that are building on top of Kubernetes,” he says. 
Even with recent challenges, Beda’s encouraged at the innovation he continues to see in open source. It all boils down to buy-in from the community and giving them the tools to keep innovating. “I think this is part of the excitement... There is a really vibrant set of projects that are experimenting, trying things out. And it’s going to be the users who decide what’s successful here.”\n\n\u003C!-- blank line -->\n\u003Cfigure class=\"video_container\">\n  \u003Ciframe src=\"https://www.youtube.com/embed/6IlyxHFedpo\" frameborder=\"0\" allowfullscreen=\"true\"> \u003C/iframe>\n\u003C/figure>\n\u003C!-- blank line -->\n\nVideo directed and produced by [Aricka Flowers](/company/team/#arickaflowers)\n{: .note}\n\n\n## Takeaways\n\n\n### On the future of open source:\n\n>“I think open source is evolving… It’s never been something that’s sat still. One of the lessons from Kubernetes more than anything else is that open source today is about community, if not more than code.”\n\n\n### On building an open source company:\n\n>“My advice to anybody who is building a company around open source is to understand sort of where are your levers, where is the value that you’re adding, and try and be creative about finding ways to add value where something like this can’t happen.”\n\n\n### On the early days of Kubernetes:\n\n>“The real story is that there was a set of us that just wanted to be able to hack on some stuff and not have to go through all the process of shipping stuff to Google… But also we very much had the idea from the start that we wanted to build a community. We wanted to enable other people to own it, to be part of it, to really feel like they were instrumental in making it happen. 
And that’s what happened.”\n\n\n### On enterprise cloud adoption:\n\n>“I think that as we start to see these enterprises start to adopt cloud, understanding the power dynamics and the relationship with cloud, I think that there is a lot of concern about how do I get some independent advice, independent thought, independent support that’s going to actually stay with me as I figure out where my position lands as I move from on-prem to cloud and beyond.”\n\nWe’ll be at KubeCon Barcelona May 20 – 23, booth #S21. Learn how you can get started with GitLab and Kubernetes, and be sure to check out Joe Beda’s keynote on May 21.\n",[685,727,9],{"slug":2017,"featured":6,"template":688},"kubernetes-chat-with-joe-beda","content:en-us:blog:kubernetes-chat-with-joe-beda.yml","Kubernetes Chat With Joe Beda","en-us/blog/kubernetes-chat-with-joe-beda.yml","en-us/blog/kubernetes-chat-with-joe-beda",{"_path":2023,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":2024,"content":2029,"config":2034,"_id":2036,"_type":13,"title":2037,"_source":15,"_file":2038,"_stem":2039,"_extension":18},"/en-us/blog/kubernetes-chat-with-kelsey-hightower",{"title":2025,"description":2026,"ogTitle":2025,"ogDescription":2026,"noIndex":6,"ogImage":2008,"ogUrl":2027,"ogSiteName":675,"ogType":676,"canonicalUrls":2027,"schema":2028},"Kubernetes and the future of cloud native: We chat with Kelsey Hightower","Our CEO sits down with Google Staff Developer Advocate Kelsey Hightower to talk fundamentals, the future of cloud native, and Kubernetes.","https://about.gitlab.com/blog/kubernetes-chat-with-kelsey-hightower","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"Kubernetes and the future of cloud native: We chat with Kelsey Hightower\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Chrissie Buchanan\"}],\n        \"datePublished\": \"2019-05-13\",\n      
}",{"title":2025,"description":2026,"authors":2030,"heroImage":2008,"date":2031,"body":2032,"category":876,"tags":2033},[787],"2019-05-13","\n\n[Kelsey Hightower](https://twitter.com/kelseyhightower) is a Staff Developer Advocate at Google, co-chair of KubeCon, the largest Kubernetes conference, and an avid open source technologist. Naturally, we couldn’t think of a better first subject for TechExplorers, a new blog series where we talk to the industry’s tech leaders.\n\nGitLab CEO [Sid Sijbrandij](/company/team/#sytses) sat down with Kelsey to talk about a variety of topics like cloud native, Kubernetes, infrastructure challenges, understanding new technology, and much more. One topic that came up again and again was fundamentals. Even with so many new technologies and methodologies out there – Kubernetes, [serverless](/topics/serverless/), cloud native – the basics of computing remain the same. It’s only when we understand the fundamentals and commit to building reliable code that we can make the most of these new platforms.\n\nOne of the biggest challenges Kelsey sees is the “all-or-nothing” approach. “Either I’m all serverless, or I’m all Kubernetes, or I’m all traditional infrastructure. That has never made sense in the history of computing,” he says. Ultimately, you don’t have to choose: Pick the platforms that work best for the job.\n\nGoing forward, Kelsey hopes that development continues to focus on high-level interfaces and hide the infrastructure underneath. Organizations want to have as little interaction with servers as possible. “That is what we’re trying to do. 
Anything more than that is noisy, and it’s kind of serving our own self-interest … We need those creative people not to be wasting time trying to build up a cloud platform before they can solve real problems.”\n\n\u003C!-- blank line -->\n\u003Cfigure class=\"video_container\">\n  \u003Ciframe src=\"https://www.youtube.com/embed/9OHNejqXOoo\" frameborder=\"0\" allowfullscreen=\"true\"> \u003C/iframe>\n\u003C/figure>\n\u003C!-- blank line -->\n\nVideo directed and produced by [Aricka Flowers](/company/team/#arickaflowers)\n{: .note}\n\n## Takeaways\n\n### On early Kubernetes:\n\n>\"... When it first came out, just based on my previous experience as a system administrator, this is the thing you’re trying to build all those years. So, when I saw it, I immediately knew this thing solves my problems. So, I think I kind of attacked it as a contributor first. And someone who wanted to teach other people what I saw in it. Not sure if it was ever going to blow up or not. But it definitely had the right footprint when it came out.\"\n\n### On teaching others:\n\n>\"I usually try to explain things based on the fundamentals, and then break down the technology until we get to the bottom. So, whenever something new comes out, my guess is it’s not going to change how we do computing. That hasn’t changed in a long time ... Once you learn the three, four, five basic fundamentals, then you just look at the new technology, and you just work your way down.\"\n\n### On invisible infrastructure:\n\n>\"Forever, people have tried to build a thing where most of the organization **doesn’t think about servers**. So whether you’re using Kubernetes, or virtualization for that matter, the whole goal is that if I check in code, there should be very little interaction with infrastructure to get that deployed to customers. 
To me, serverless is just a reminder to us that we should focus on a high-level interface and hide the various infrastructure underneath.\"\n\n### On adopting cloud native platforms:\n\n>\"If you take your app that you wrote 20 years ago and neglect it all this time, you don’t have any of those kind of controls, and you just move that app into the cloud native type of design patterns, it’s going to be worse than what you had before … People have to understand that there’s tradeoffs. You’re going to have to _write more reliable code_ if you expect to be able to adopt these platforms.\"\n\n### On monoliths:\n\n>\"There’s nothing wrong with monoliths, honestly. People have gotten themselves in a spot where they can’t really update the code. It’s messy. The codebase is all over the place. And if you take that same mentality to functions, you’re just going to have a mess of functions that are going to be all over the place and not even know how to call them.\n\n>\"_Discipline is required no matter what the platform is._ People think platform will absolve them from discipline.\"\n",[685,727,9],{"slug":2035,"featured":6,"template":688},"kubernetes-chat-with-kelsey-hightower","content:en-us:blog:kubernetes-chat-with-kelsey-hightower.yml","Kubernetes Chat With Kelsey Hightower","en-us/blog/kubernetes-chat-with-kelsey-hightower.yml","en-us/blog/kubernetes-chat-with-kelsey-hightower",{"_path":2041,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":2042,"content":2048,"config":2053,"_id":2055,"_type":13,"title":2056,"_source":15,"_file":2057,"_stem":2058,"_extension":18},"/en-us/blog/kubernetes-kubecon-barcelona",{"title":2043,"description":2044,"ogTitle":2043,"ogDescription":2044,"noIndex":6,"ogImage":2045,"ogUrl":2046,"ogSiteName":675,"ogType":676,"canonicalUrls":2046,"schema":2047},"See you at KubeCon Barcelona!","We're excited to see you all in Barcelona! 
Visit us at booth S21.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749664107/Blog/Hero%20Images/tanuki-adventure.png","https://about.gitlab.com/blog/kubernetes-kubecon-barcelona","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"See you at KubeCon Barcelona!\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Priyanka Sharma\"}],\n        \"datePublished\": \"2019-05-17\",\n      }",{"title":2043,"description":2044,"authors":2049,"heroImage":2045,"date":2050,"body":2051,"category":876,"tags":2052},[1082],"2019-05-17","\nKubeCon is here again! I am very excited to go to Barcelona and meet (some of) the 12,000 attendees expected at the show. I’ve been part of KubeCon since the second event when there were 700 attendees. That year, we were a cozy community with about five projects, and Kubernetes was the newest game in town. Fast forward to today and I now serve on the board of the CNCF, Kubernetes is a stable technology, the foundation hosts 36 projects, and the latest of them to graduate will be Fluentd (after Kubernetes, Prometheus, CoreDNS, Envoy, and Containerd). I can’t quite reveal it yet, but there will be a very cool GitLab story intertwined with one of the projects that you will see for yourself soon :-).\n\n\u003Cscript type=\"text/javascript\" src=\"https://ssl.gstatic.com/trends_nrtr/1754_RC01/embed_loader.js\">\u003C/script> \u003Cscript type=\"text/javascript\"> trends.embed.renderExploreWidget(\"TIMESERIES\", {\"comparisonItem\":[{\"keyword\":\"kubernetes\",\"geo\":\"\",\"time\":\"today 5-y\"}],\"category\":0,\"property\":\"\"}, {\"exploreQuery\":\"date=today%205-y&q=kubernetes\",\"guestPath\":\"https://trends.google.com:443/trends/embed/\"}); \u003C/script>\n*\u003Csmall>Kubernetes growth over the past 5 years.\u003C/small>*\n\nAs some of you know, I joined GitLab after following the company and our CEO, Sid Sijbrandij, for a long time. 
Working at this dynamic company has been a ride of a lifetime. I am an open source person and one of the interesting things for me is how the [GitLab story](/company/history/) is similar to the Kubernetes story. GitLab started as an open source Git provider because our co-founder, [Dmitriy \"DZ\" Zaporozhets](/company/team/#dzaporozhets), didn’t like his options. Today, we have morphed into a [single application for the entire DevOps lifecycle](/stages-devops-lifecycle/). Similarly, Kubernetes comes from humble beginnings. In the words of Joe Beda, co-founder of Kubernetes, “there were a set of us that just wanted to be able to hack on some stuff and not have to go through all the process of shipping stuff to Google...it was more important for us to sort of reset the playing field between clouds. And so Kubernetes became a way for us to start doing that.”\n\nIt’s exciting to watch Kubernetes grow into the default container orchestration platform, but I believe the best is yet to come: when the technology truly shifts left and every developer has access to it. That’s where GitLab comes in. With its deep focus on the developer workflow, the product brings efficiency, collaboration, and governance to teams spread across the world wide web (a la GitLab itself) or small groups working out of a garage. When everything’s in the MR, everything is accessible, including details on your Kubernetes pods. 
I invite you to learn more about how we [integrate with Kubernetes](/solutions/kubernetes/).\n\n> “The only way in my opinion to make it easier for most end users to have a \"cloud-native\" experience is to provide a more end-to-end platform, a way that people can come together and they can edit code and review code and then actually do CI on that code and get that code shipped out to containers and have it be run with appropriate load balancing and observability.” — Matt Klein, Systems Engineer at Lyft\n\n\u003C!-- blank line -->\n\u003Cfigure class=\"video_container\">\n  \u003Ciframe src=\"https://www.youtube.com/embed/w0cZuG2Fcwo\" frameborder=\"0\" allowfullscreen=\"true\"> \u003C/iframe>\n\u003C/figure>\n\u003C!-- blank line -->\n*\u003Csmall>Video directed and produced by [Aricka Flowers](/company/team/#arickaflowers).\u003C/small>*\n\n## Let's connect!\n\n[Meet us at booth S21](https://about.gitlab.com/events/kubecon/) for CI office hours, tanuki adventures, and iPad giveaways!\n\nI'd love to help any CNCF projects (and other folks!) consider [GitLab CI](/solutions/continuous-integration/). If you are interested, [DM me on Twitter](https://twitter.com/pritianka) and we can sit down and discuss.\n\n## Join us for these events\n\n### Monday, May 20\n\n#### Cloud-Native Transformation Summit Hosted by Sysdig | 9:00 am - 12:15 pm\n\nJoin Priyanka Sharma, Director of Technical Evangelism at GitLab, at this zero day KubeCon event. This event will look at how enterprise organizations are moving into production-level Kubernetes and transforming their applications and infrastructure operations into Cloud-Native technologies.\n[Learn more here](https://go.sysdig.com/cloud_native_transformation_summit_2019.html).\n\n#### Zero Trust in the Cloud Native Era at Cloud Native Security Day | 11:00 - 11:30 am\n\nPriyanka Sharma, Director of Technical Evangelism at GitLab covers zero trust in the era of cloud native. 
[Register here](https://go.twistlock.com/cloudnativesecurityday#agenda).\n\n#### The Future of CI/CD with Kubernetes | 2:40 - 3:20 pm\n\nJoin Dan Lorenc, Software Engineer at Google; Carlos Sanchez, Principal Software Engineer at CloudBees; Priyanka Sharma, Director of Technical Evangelism at GitLab; and Rob Zuber, CTO at CircleCI, for a discussion on the future of CI/CD with Kubernetes. [Learn more here](https://sched.co/N6FQ).\n\n#### Barcelona Free Software Meetup: Working in the Open with GitLab, Kubic with openSUSE | 7-9 pm\n\nJoin Jason Plum, a Senior Software Engineer, Distribution at GitLab, for a talk on GitLab’s open-core product. He’ll discuss contributing back to the community directly, as well as share insights on changing from Closed to Open.\n[RSVP here](https://www.meetup.com/Barcelona-Free-Software/events/260656266/).\n\n### Tuesday, May 21\n\n#### Tutorial: Cloud-Agnostic Serverless - Sebastien Goasguen, TriggerMesh & Priyanka Sharma, GitLab | 11:05 am - 12:30 pm\n\nIn this tutorial, we will leverage Knative, Google's Kubernetes-based open source platform, to build, deploy, and manage modern serverless workloads. We will push serverless functions and apps to production on any cloud of choice and switch the provider as necessary. We will leverage GitLab and TriggerMesh technology in the tutorial and also share how developers can use other options.\nSign up for the tutorial through the KubeCon schedule [here](https://sched.co/MPgx).\n\n#### Multicloud 360 Event | 8:30 pm - Midnight\n\nJoin GitLab, Upbound, DigitalOcean, Google Cloud and CockroachDB for 360 views of Barcelona and a discussion of multicloud. 
[RSVP here](https://www.eventbrite.com/e/multicloud-360-tickets-60623662005) to reserve your spot.\n\n### Wednesday, May 22\n\n#### The Serverless Landscape and Event Driven Futures - Dee Kumar, Linux Foundation & Priyanka Sharma, GitLab | 2:00 - 2:35 pm\n\nThere is a lot of curiosity and confusion around [serverless computing](/topics/serverless/). What is it? Who is it for? Is it a replacement for IaaS, PaaS, and containers? Does that mean the days of servers are over? The CNCF created the Serverless Working Group to explore the intersection of cloud native and serverless technology. [Learn more here](https://sched.co/MPeI).\n\n## Play #tanukiadventure\n\nJoin our #tanukiadventure! Grab your game card at our booth S21 to help guide your adventure in finding GitLab's partners. At each adventure stop, learn how they work with GitLab! Once complete, each partner will provide you with an exclusive GitLab collectible pin to celebrate our awesome partnership! The first 50 attendees to collect all 8 unique Tanuki pins will win our prized GitLab Tanuki hoodie!\n",[901,727,278,9,835],{"slug":2054,"featured":6,"template":688},"kubernetes-kubecon-barcelona","content:en-us:blog:kubernetes-kubecon-barcelona.yml","Kubernetes Kubecon Barcelona","en-us/blog/kubernetes-kubecon-barcelona.yml","en-us/blog/kubernetes-kubecon-barcelona",{"_path":2060,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":2061,"content":2067,"config":2073,"_id":2075,"_type":13,"title":2076,"_source":15,"_file":2077,"_stem":2078,"_extension":18},"/en-us/blog/kubernetes-overview-operate-cluster-data-on-the-frontend",{"title":2062,"description":2063,"ogTitle":2062,"ogDescription":2063,"noIndex":6,"ogImage":2064,"ogUrl":2065,"ogSiteName":675,"ogType":676,"canonicalUrls":2065,"schema":2066},"Kubernetes overview: Operate cluster data on the frontend","GitLab offers a built-in solution for monitoring your Kubernetes cluster health. 
Learn more about the technical design and functionality with this detailed guide.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1750099045/Blog/Hero%20Images/Blog/Hero%20Images/blog-image-template-1800x945%20%2816%29_3L7ZP4GxJrShu6qImuS4Wo_1750099045397.png","https://about.gitlab.com/blog/kubernetes-overview-operate-cluster-data-on-the-frontend","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"Kubernetes overview: Operate cluster data on the frontend\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Anna Vovchenko\"}],\n        \"datePublished\": \"2024-06-20\",\n      }",{"title":2062,"description":2063,"authors":2068,"heroImage":2064,"date":2070,"body":2071,"category":683,"tags":2072},[2069],"Anna Vovchenko","2024-06-20","Accessing real-time cluster information is crucial for verifying successful\nsoftware deployments and initiating troubleshooting processes. In this\narticle, you'll learn about GitLab's enhanced Kubernetes integration,\nincluding how to leverage the Watch API for real-time insights into\ndeployment statuses and streamlined troubleshooting capabilities. \n\n\n## What are GitLab's Kubernetes resources?\n\n\nGitLab offers a dedicated [dashboard for\nKubernetes](https://gitlab.com/groups/gitlab-org/-/epics/2493 \"Visualize the\ncluster state in GitLab\") to understand the status of connected clusters\nwith an intuitive visual interface. It is integrated into the Environment\nDetails page and shows resources relevant to the environment. Currently,\nthree types of Kubernetes resources are available:\n\n\n- pods filtered by the Kubernetes namespace\n\n- services\n\n- Flux resource\n([HelmRelease](https://fluxcd.io/flux/components/helm/helmreleases/) or\n[Kustomization](https://fluxcd.io/flux/components/kustomize/kustomizations/))\n\n\nFor these resources, we provide general information, such as name, status,\nnamespace, age, etc. 
It is represented similarly to what the\n[kubectl](https://kubernetes.io/docs/reference/kubectl/) command would show\nwhen run against the Kubernetes cluster. More details can be found when\nclicking each resource: The side drawer shows the list of labels,\nannotations, and detailed status and spec information presented as read-only\nYAML code blocks.\n\n\nThe information provided helps to visualize the cluster state, spot any\nissues, and debug problematic deployments right away.\n\n\n## Frontend to cluster communication: The GitLab solution\n\n\nWe have developed a range of tools and solutions to enable seamless\nconnection to and management of Kubernetes clusters within GitLab. One of the\ncore components of this system is the [GitLab agent for\nKubernetes](https://docs.gitlab.com/ee/user/clusters/agent/install/). This\npowerful tool provides a secure bidirectional connection between a GitLab\ninstance and a Kubernetes cluster. It is composed of two main components:\n**agentk** and **KAS** (Kubernetes agent server).\n\n\n![Kubernetes flow\nchart](https://res.cloudinary.com/about-gitlab-com/image/upload/v1750099055/Blog/Content%20Images/Blog/Content%20Images/image2_aHR0cHM6_1750099055229.png)\n\n\nagentk is a lightweight cluster-side component. It is responsible for\nestablishing a connection to a KAS instance and waiting for requests to\nprocess. It proxies requests from KAS to the Kubernetes API. It may also\nactively send information about cluster events to KAS.\n\n\nWhile agentk is actively communicating with the cluster, KAS represents a\nGitLab server-side component. 
It is responsible for:\n\n\n- accepting requests from agentk\n\n- authenticating agentk requests by querying the GitLab backend\n\n- fetching the agent's configuration from a corresponding Git repository\nusing Gitaly\n\n- polling manifest repositories for GitOps support\n\n\nWe implemented the agent access rights feature to provide access from the\nGitLab frontend to the cluster in a secure and reliable way. To enable the\nfeature, the user should update the agent’s configuration file by adding the\n[user_access](https://docs.gitlab.com/ee/user/clusters/agent/user_access.html)\nsection with the following parameters: `projects`, `groups`, and `access_as`\nto specify which projects can access cluster information via the agent and\nhow it should authenticate.\n\n\nOnce this is done, the frontend can connect to the cluster by sending a\nrequest to the Rails controller, which should set a `gitlab_kas` cookie.\nThis cookie is then added to the request sent to KAS together with the agent\nID and Cross-Site Request Forgery (CSRF) token. Upon receiving the request,\nKAS checks the user’s authorization and forwards it to agentk, which makes\nan actual request to the Kubernetes API. Then the response goes all the way\nback from agentk to KAS and finally to the GitLab client.\n\n\n![Kubernetes overview - how it\nworks](https://res.cloudinary.com/about-gitlab-com/image/upload/v1750099055/Blog/Content%20Images/Blog/Content%20Images/image6_aHR0cHM6_1750099055229.png)\n\n\nTo integrate this logic on the GitLab frontend and use it within the Vue\napp, we developed a JavaScript library:\n[@gitlab/cluster-client](https://gitlab.com/gitlab-org/cluster-integration/javascript-client).\nIt is generated from the Kubernetes OpenAPI specification using the\ntypescript-fetch generator. 
It provides all the Kubernetes APIs in a way\nthat can be used in a web browser.\n\n\n## Introducing the Watch API\n\n\nThe most challenging task is to provide **real-time updates** for the\nKubernetes dashboard. Kubernetes introduces the concept of watches as an\nextension of GET requests, exposing the body contents as a [readable\nstream](https://developer.mozilla.org/en-US/docs/Web/API/Streams_API/Using_readable_streams).\nOnce connected to the stream, the Kubernetes API pushes cluster state\nupdates similarly to how the `kubectl get \u003Cresource> --watch` command works.\nThe watch mechanism allows a client to fetch the current state of the\nresource (or resources list) and then subscribe to subsequent changes,\nwithout missing any events. Each event contains a type of modification (one\nof three types: added, modified, or deleted) and the affected object.\n\n\nWithin the `WatchApi` class of the `@gitlab/cluster-client` library, we've\ndeveloped a systematic approach for interacting with the Kubernetes API.\nThis involves fetching a continuous stream of data, processing it line by\nline, and managing events based on their types. Let's explore the key\ncomponents and functionalities of this approach:\n\n\n1. Extending the Kubernetes API: Within the WatchApi class, we extend the\nbase Kubernetes API functionality to fetch a continuous stream of data with\na specified path and query parameters. This extension enables efficient\nhandling of large datasets, as the stream is processed line by line.\n2. Decoding and event categorization: Upon receiving the stream, each line, typically representing a JSON object, is decoded. This process extracts relevant information and categorizes events based on their types.\n3. Internal data management: The `WatchApi` class maintains an internal data\narray to represent the current state of the streamed data, updating it\naccordingly as new data arrives or changes occur.\n4. Event listeners: The `WatchApi` class implements methods for registering event listeners,\nsuch as `onData`, `onError`, `onTimeout`, and `onTerminate`. These methods\nallow developers to customize their application's response to events like\ndata updates, errors, and timeouts. \n\n\nThe code also handles scenarios such as invalid content types, timeouts, and\nerrors from the server, emitting corresponding events for clients to handle\nappropriately. **With this straightforward, event-driven approach, the\n`WatchApi` class allows developers to create responsive real-time\napplications efficiently.**\n\n\n![Kubernetes overview - flow\nchart](https://res.cloudinary.com/about-gitlab-com/image/upload/v1750099055/Blog/Content%20Images/Blog/Content%20Images/image4_aHR0cHM6_1750099055231.png)\n\n\n## How is the Kubernetes overview integrated with the GitLab frontend?\n\n\nCurrently, we have two Kubernetes integrations within the product: the\nKubernetes overview section for the Environments and the full Kubernetes\ndashboard as a separate view. The latter is a major effort to represent\nall the available Kubernetes resources with filtering and sorting\ncapabilities and a detailed view with the full information on the metadata,\nspec, and status of the resource. This initiative is now on hold while we\nare searching for the most useful ways of representing the Kubernetes\nresources related to an environment.\n\n\n[The Kubernetes\noverview](https://docs.gitlab.com/ee/ci/environments/kubernetes_dashboard.html)\non the Environments page is a detailed view of the Kubernetes resources\nrelated to a specific environment. 
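
The line-by-line stream processing described in the Watch API section above can be sketched roughly as follows. This is an illustrative sketch under assumptions: the function names `consumeWatchStream` and `applyWatchEvent` are hypothetical and not part of `@gitlab/cluster-client`; only the event shape (a `type` of `ADDED`/`MODIFIED`/`DELETED` plus the affected `object`) follows the Kubernetes watch format.

```javascript
// Hypothetical sketch of consuming a Kubernetes watch stream.
// Not the actual @gitlab/cluster-client implementation.

// Apply one watch event to the current list of resources,
// keyed by metadata.name.
function applyWatchEvent(items, event) {
  const { type, object } = event;
  const name = object.metadata.name;
  switch (type) {
    case 'ADDED':
      return [...items, object];
    case 'MODIFIED':
      return items.map((it) => (it.metadata.name === name ? object : it));
    case 'DELETED':
      return items.filter((it) => it.metadata.name !== name);
    default:
      return items; // e.g. BOOKMARK/ERROR events would need real handling
  }
}

// Read a fetch() response body line by line; each non-empty line is
// a JSON-encoded watch event handed to the callback.
async function consumeWatchStream(body, onEvent) {
  const reader = body.getReader();
  const decoder = new TextDecoder();
  let buffer = '';
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split('\n');
    buffer = lines.pop(); // keep any trailing partial line for the next chunk
    for (const line of lines) {
      if (line.trim()) onEvent(JSON.parse(line));
    }
  }
}
```

Keeping the partial trailing line in a buffer matters because a chunk boundary can fall in the middle of a JSON object; splitting naively would feed `JSON.parse` a truncated event.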
To access the cluster state view, the\nuser should select an agent installed in the cluster with the appropriate\naccess rights, optionally provide a namespace, and select a related Flux\nresource.\n\n\nThe view renders a list of Kubernetes pods and services filtered by the\nnamespace, showing their statuses as well as the Flux sync status.\nClicking each resource opens a detailed view with more information for easy\nissue spotting and high-level debugging. \n\n\n![Kubernetes overview - list of Kubernetes pods and\nservices](https://res.cloudinary.com/about-gitlab-com/image/upload/v1750099055/Blog/Content%20Images/Blog/Content%20Images/image5_aHR0cHM6_1750099055233.png)\n\n\nWe need to set up a configuration object that will be used for all\nthe API requests. In the configuration, we need to specify the URL provided\nby KAS, which proxies the Kubernetes APIs; the GitLab agent ID to connect\nwith; and the CSRF token. We need to include cookies so that the\n`kas_cookie` gets picked up and sent within the request.\n\n\n```javascript\n\ncreateK8sAccessConfig({ kasTunnelUrl, gitlabAgentId }) {\n  return {\n    basePath: kasTunnelUrl,\n    headers: {\n      'GitLab-Agent-Id': gitlabAgentId,\n      ...csrf.headers,\n    },\n    credentials: 'include',\n  };\n}\n\n```\n\n\nAll the API requests are implemented as GraphQL client queries for\nefficiency, flexibility, and ease of development. The query structure\nenables clients to fetch data from various sources in one request. With\nclear schema definitions, GraphQL minimizes errors and enhances developer\nefficiency.\n\n\nWhen first rendering the Kubernetes overview, the frontend requests static\nlists of pods, services, and the Flux resource (either HelmRelease or\nKustomization). The fetch request is needed to render the empty view\ncorrectly. 
If the frontend tried to subscribe to the Watch API stream and\none of the resource lists was empty, we would wait for the updates forever\nand never show the actual result – 0 resources. In the case of pods and\nservices, after the initial request, we subscribe to the stream even if an\nempty list was received to reflect any cluster state changes. For the Flux\nresource, the chances that the resource will appear\nafter the initial request are low. We use the empty response here as an\nopportunity to provide more information about the feature and its setup. \n\n\n![Kubernetes overview - flux sync status\nunavailable](https://res.cloudinary.com/about-gitlab-com/image/upload/v1750099055/Blog/Content%20Images/Blog/Content%20Images/image3_aHR0cHM6_1750099055235.png)\n\n\nAfter rendering the initial result, the frontend makes additional requests\nto the Kubernetes API with the `?watch=true` query parameter in the URL. We\ncreate separate watchers for each event type – data, error, or timeout. When\nreceiving the data, we follow three steps:\n\n\n- transform the data\n\n- update the Apollo cache\n\n- run a mutation to update the connection status\n\n\n```javascript\n\nwatcher.on(EVENT_DATA, (data) => {\n  result = data.map(mapWorkloadItem);\n  client.writeQuery({\n    query,\n    variables: { configuration, namespace },\n    data: { [queryField]: result },\n  });\n\n  updateConnectionStatus(client, {\n    configuration,\n    namespace,\n    resourceType: queryField,\n    status: connectionStatus.connected,\n  });\n});\n\n```\n\n\nAs we show the detailed information for each resource, we rely on having the\nstatus, spec, and metadata fields with the annotations and labels included.\nThe Kubernetes API doesn’t always send this information, which could break\nthe UI and throw errors from the GraphQL client. We transform the received\ndata first to avoid these issues. 
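
A defensive transform of this kind might look roughly like the following sketch. The helper name `mapWorkloadItem` is reused from the snippet above, but the body shown here is an illustrative assumption, not GitLab's actual implementation:

```javascript
// Illustrative sketch only – GitLab's real mapWorkloadItem may differ.
// It fills in the fields the UI relies on (status, spec, metadata with
// labels and annotations) when the Kubernetes API omits them, and tags
// the item with a __typename so shared GraphQL fragments can be reused.
function mapWorkloadItem(item) {
  const metadata = item.metadata || {};
  return {
    status: item.status || {},
    spec: item.spec || {},
    metadata: {
      ...metadata,
      annotations: metadata.annotations || {},
      labels: metadata.labels || {},
    },
    // The typename value below is an assumption for illustration.
    __typename: 'LocalWorkloadItem',
  };
}
```

With every field guaranteed to exist, the Apollo cache writes and the shared fragments never encounter a missing `status`, `spec`, or `metadata` object.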
We also add the `__typename` so that we\ncan better define the data types and simplify the queries by reusing the\nshared fragments.\n\n\nAfter data stabilization, we update the Apollo cache so that the frontend\nre-renders the views to reflect cluster state changes.\nInterestingly, we can visualize exactly what happens in the cluster – for\nexample, when deleting pods, Kubernetes first creates the new ones in\nthe pending state, and only then removes the old pods. Thus, for a moment we\ncan see double the number of pods. We can also verify how the pods proceed\nfrom one state to another in real time. This is done with the combination of\nadded, deleted, and modified events received from the Kubernetes APIs and\nprocessed in the `WatchApi` class of the `@gitlab/cluster-client` library.\n\n\n![Kubernetes overview - states of connection\nstatus](https://res.cloudinary.com/about-gitlab-com/image/upload/v1750099055/Blog/Content%20Images/Blog/Content%20Images/image1_aHR0cHM6_1750099055236.gif)\n\n\nBy default, with a single Watch request, we get a stream of events for five\nminutes, and then it hits the timeout. We need to properly reflect this on\nthe frontend so that the user is aware of any outdated information. To\nachieve this, we introduced a `k8sConnection` query together with a\n`reconnectToCluster` mutation. We have a UI element – a badge with a tooltip\nto indicate the connection status. It has three states: connecting,\nconnected, and disconnected. The state gets updated within every step of the\nUX flow. First, we set it to `connecting` once the Watch client gets\ncreated. Then we update it to `connected` with the first received piece of\ndata. Last, we trigger the mutation for the `disconnected` state when an error\nor timeout event occurs. 
This way, we can let the user refresh the view and\nreconnect to the stream without needing to refresh the browser tab.\nRelying on a user action to reconnect to the stream helps us save\nresources and only request the necessary data while ensuring an accurate\ncluster state is available to the user at any time.\n\n\n## What’s next?\n\n\nLeveraging the built-in Kubernetes watch mechanism over a readable\nstream helped us build the feature quickly and provide a\nKubernetes UI solution to our customers, getting early feedback and\nadjusting the product direction. This approach, however, presented technical\nchallenges, such as the inability to use GraphQL subscriptions and\nthe need to reconnect to the stream.\n\n\nWe are planning our next iterations to enhance the Kubernetes overview\nwithin GitLab UI. One of the planned iterations for the feature,\n[Frontend-friendly Kubernetes Watch\nAPI](https://gitlab.com/gitlab-org/cluster-integration/gitlab-agent/-/issues/541),\nis an updated mechanism of batch-watching the cluster data and moving from\nthe fetch Readable stream to WebSockets. We are going to create a new API in\nKAS to expose the Kubernetes watch capability via WebSocket. This should\nreduce the complexity of the JavaScript code, resolve the timeout issue, and\nimprove the compatibility of the Kubernetes APIs within GitLab frontend\nintegrations.\n\n\n> Curious to learn more or want to try out this functionality? 
Visit our\n[Kubernetes Dashboard\ndocumentation](https://docs.gitlab.com/ee/ci/environments/kubernetes_dashboard.html)\nfor more details and configuration tips.\n",[9,984,748],{"slug":2074,"featured":90,"template":688},"kubernetes-overview-operate-cluster-data-on-the-frontend","content:en-us:blog:kubernetes-overview-operate-cluster-data-on-the-frontend.yml","Kubernetes Overview Operate Cluster Data On The Frontend","en-us/blog/kubernetes-overview-operate-cluster-data-on-the-frontend.yml","en-us/blog/kubernetes-overview-operate-cluster-data-on-the-frontend",{"_path":2080,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":2081,"content":2086,"config":2091,"_id":2093,"_type":13,"title":2094,"_source":15,"_file":2095,"_stem":2096,"_extension":18},"/en-us/blog/kubernetes-terminology",{"title":2082,"description":2083,"ogTitle":2082,"ogDescription":2083,"noIndex":6,"ogImage":996,"ogUrl":2084,"ogSiteName":675,"ogType":676,"canonicalUrls":2084,"schema":2085},"Understand Kubernetes terminology from namespaces to pods","Kubernetes can be a critical piece of successful DevOps but there's a lot to learn. We explain the terms and share a hands-on demo.","https://about.gitlab.com/blog/kubernetes-terminology","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"Understand Kubernetes terminology from namespaces to pods\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Valerie Silverthorne\"}],\n        \"datePublished\": \"2020-07-30\",\n      }",{"title":2082,"description":2083,"authors":2087,"heroImage":996,"date":2088,"body":2089,"category":790,"tags":2090},[680],"2020-07-30","\n\n_If you're brand new to Kubernetes, you'll want to start with our [Kubernetes 101 guide](/blog/kubernetes-101/)._\n\nKubernetes and containers are often seen as two key elements in a [successful DevOps practice](/topics/devops/). 
But there's no question that Kubernetes can be intimidating to those not familiar with it. In fact, our [2020 Global DevSecOps Survey](/developer-survey/) found just 38% of respondents are actively using Kubernetes today while 50% are not. Anecdotally though, interest in Kubernetes is very high:\n\n_\"We are on the path to get our monolithic server into a set of microservices and the goal is to use Kubernetes to help on this side.\"_\n\n_\"We're trying to get there.\"_\n\n_\"It's a priority for our platform team.\"_\n\nThis past spring, staff distribution engineer [Jason Plum](/company/team/#WarheadsSE) and senior distribution engineer [Gerard Hickey](/company/team/#ghickey) walked attendees at GitLab's company-wide meeting, Contribute, through something they called _Kubernetes 102_ that looked at the practical building blocks required for a cloud-native application on [Kubernetes](https://kubernetes.io). As Jason puts it in the [video](https://www.youtube.com/watch?v=jdKXBJLHP8I&feature=emb_title), \"what we're trying to do here is to not just say, 'Look at all the magic we do' but actually explain the things we're doing right.\" Although this was a \"laptops out\" demo, here's a look at the key concepts and Kubernetes terminology you'll need to understand, followed by a link to the entire presentation if you'd like to dive right in.\n\n## Start with containers\n\nA container is not a jail, but a jail is a container, Jason explains. \"A container is a way of packaging an application so that it is portable. It's contained, hence (the term) 'container' and it's immutable. It's the runtime requirements to actually execute and package that up in an immutable form that you can hand to someone.\"\n\nBut containers can have a tendency to get out of hand, so you need something to help keep track. That's where Kubernetes comes in, Jason says in the presentation. \"So what is Kubernetes at a high level? 
I've seen orchestrator, I've seen management system and I've seen coordinator. Kubernetes is all of those things.\"\n\nKubernetes weaves both containers and software-defined networking together, creating \"a platform you can deploy onto with a clear syntax,\" Jason says. \"That syntax is replicable and not vendor bound so that you can deploy it anywhere that supports the official behaviors. Its job is to start containers, keep them running and make sure they're still running. That's what its job is really about.\"\n\n## Unpacking the moving parts\n\nIf you want to get more familiar with Kubernetes, it helps to understand the unique terminology, Jason stresses. Here are key terms that will help to explain the processes involved in running Kubernetes:\n\n**Namespaces**: In Kubernetes, [a namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) is effectively your working area. It's like a project in GCP or a similar thing in AWS.\n\n**Pods**: [A pod](https://kubernetes.io/docs/concepts/workloads/pods/) is effectively a unit of work. It is a way to describe a series of containers, the volumes they might share, and interconnections that those containers within the pod may need. You can have a pod that has a single container in it (or more than one container). Pods are flexible, too: Update one and it becomes version two, and version one is taken out, giving you a rolling update. As Jason spells out, \"It gives us a way to say, 'I always want to have three and still be able to migrate an application live from one version to another version without having downtime.'\"\n\n**Service**: Kubernetes \"has a concept of [a service](https://kubernetes.io/docs/concepts/services-networking/service/),\" Jason says. \"It can be thought of as like a load balancer for pods. 
It knows which pods are alive, healthy, and ready to respond, so that when we want to reach a pod we connect to the service and take whichever healthy pod it gives us, rather than always asking the same pod for work.\"\n\n**Ingress**: This works with the service to make sure everything ends up in the right place. [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) can also provide load balancing.\n\n**ConfigMaps**: This is an API object for storing information in key-value pairs. \"A [ConfigMap](https://kubernetes.io/docs/concepts/configuration/configmap/) is very useful for doing things like pre-stashing environment variables or files that can actually be mounted directly into pods without actually having to have an actual file system somewhere,\" Jason says, adding that they're not meant for confidential data.\n\n**Secrets**: [Secrets](https://kubernetes.io/docs/concepts/configuration/secret/) are objects for storing confidential information, as the name implies.\n\nNow that you have the Kubernetes terminology down, watch the entire presentation here:\n\n\u003C!-- blank line -->\n\u003Cfigure class=\"video_container\">\n  \u003Ciframe src=\"https://www.youtube-nocookie.com/embed/jdKXBJLHP8I\" frameborder=\"0\" allowfullscreen=\"true\"> \u003C/iframe>\n\u003C/figure>\n\u003C!-- blank line -->\n\n**Read more about Kubernetes**:\n\n* [Keep your Kubernetes runners moving](/blog/best-practices-for-kubernetes-runners/)\n\n* Set up GitLab CI/CD on [Google Kubernetes Engine](/blog/gitlab-ci-on-google-kubernetes-engine/) in 15 minutes!\n\n* Create a [Kubernetes cluster](/blog/gitlab-eks-integration-how-to/) on Amazon EKS\n\nCover image by [Matti Johnson](https://unsplash.com/@matti_johnson) on [Unsplash](https://unsplash.com)\n{: .note}\n\n## Read more on Kubernetes:\n\n- [How to install and use the GitLab Kubernetes Operator](/blog/gko-on-ocp/)\n\n- [Threat modeling the Kubernetes Agent: from MVC to continuous 
improvement](/blog/threat-modeling-kubernetes-agent/)\n\n- [How to deploy the GitLab Agent for Kubernetes with limited permissions](/blog/setting-up-the-k-agent/)\n\n- [A new era of Kubernetes integrations on GitLab.com](/blog/gitlab-kubernetes-agent-on-gitlab-com/)\n\n- [What we learned after a year of GitLab.com on Kubernetes](/blog/year-of-kubernetes/)\n",[9,685,727],{"slug":2092,"featured":6,"template":688},"kubernetes-terminology","content:en-us:blog:kubernetes-terminology.yml","Kubernetes Terminology","en-us/blog/kubernetes-terminology.yml","en-us/blog/kubernetes-terminology",{"_path":2098,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":2099,"content":2105,"config":2112,"_id":2114,"_type":13,"title":2115,"_source":15,"_file":2116,"_stem":2117,"_extension":18},"/en-us/blog/kubernetes-the-container-orchestration-solution",{"title":2100,"description":2101,"ogTitle":2100,"ogDescription":2101,"noIndex":6,"ogImage":2102,"ogUrl":2103,"ogSiteName":675,"ogType":676,"canonicalUrls":2103,"schema":2104},"Kubernetes: Get to know the container orchestration solution","Kubernetes, also known as K8s, is a must-have solution for deploying and maintaining applications, especially in the cloud. 
Learn the basics of Kubernetes with this introductory guide.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749660215/Blog/Hero%20Images/kubernetes-container-orchestration-solution.jpg","https://about.gitlab.com/blog/kubernetes-the-container-orchestration-solution","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"Kubernetes: Get to know the container orchestration solution\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"GitLab Team\"}],\n        \"datePublished\": \"2024-07-25\",\n      }",{"title":2100,"description":2101,"authors":2106,"heroImage":2102,"date":2108,"body":2109,"category":876,"tags":2110,"updatedDate":2111},[2107],"GitLab Team","2024-07-25","Kubernetes automates the tasks of deploying and managing containerized applications on a large scale. Over time, Kubernetes has become an essential tool for developing applications in many areas, such as [microservices](https://about.gitlab.com/topics/microservices/), web applications, and databases. Its performance and scalability make it a recognized standard in container management today.\n\nDiscover everything you need to know about Kubernetes in this article.\n\n## What is Kubernetes?\n\nKubernetes is an open-source system for efficiently orchestrating the containers of a software application. Containerization is a widely acclaimed approach to developing applications, especially in the areas of digital transformation and the cloud.\n\nIf you're not familiar with the concept of containers, note that it is an application development method that groups the components of an application into standardized units – or containers – that are independent of the devices and operating systems they are located on. 
By isolating applications from their environment, this technology facilitates their deployment and portability, as well as reduces interoperability conflicts.\n\nThis is where we use the Kubernetes software. Certainly, containers allow applications to be divided into smaller and autonomous modules, thus facilitating their deployment. However, for containers to interact within an application, a management system encompassing these modules is necessary. That's exactly what Kubernetes does. Kubernetes provides a platform to control where and how containers run, so you can orchestrate and schedule their execution to manage containerized applications on a large scale.\n\n> Browse [GitLab articles about Kubernetes](https://about.gitlab.com/blog/tags/kubernetes/).\n\n## How does a Kubernetes architecture work?\n\nTo understand how a Kubernetes architecture works, it is essential to become familiar with certain concepts, starting with that of the cluster, which is the most extensive within the architecture. A Kubernetes cluster is defined as the set of virtual or physical machines on which a containerized application is installed.\n\n![Components of Kubernetes](https://res.cloudinary.com/about-gitlab-com/image/upload/v1749673941/Blog/Content%20Images/components-of-kubernetes.png)\n\nSource: [Kubernetes](https://kubernetes.io/docs/concepts/overview/components/).\n\nThis cluster comprises different elements:\n- Node: This is a work unit in a Kubernetes cluster. It is a virtual or physical machine that performs tasks on behalf of the application.\n- Pod: A pod is the smallest deployable unit in Kubernetes. It is a group of containers working together on the same node. Containers inside a pod share the same network and can communicate with each other via localhost.\n- Service: A Kubernetes service exposes a pod to the network or other pods. 
It offers a stable and well-defined access point to applications hosted by pods.\n- Volume: A folder abstraction that solves problems of sharing and retrieving files within a container.\n- Namespace: A namespace allows you to group and isolate resources to form a virtual cluster.\n\nThe Kubernetes architecture is based on two main types of nodes: the master node and the worker nodes. The master node is responsible for the overall management of the Kubernetes cluster and communication with the worker nodes. Among its key components, the API server is the central point of contact for all communications between users and the cluster. The [etcd](https://kubernetes.io/docs/concepts/overview/components/#etcd) is the key-value database where the configurations, system state, and object metadata are stored. The controller manager coordinates background operations such as pod replication, and the scheduler places pods on nodes based on available resources.\n\nWorker nodes, on the other hand, are the machines that run and manage the applications contained in the pods. Within them, the [kubelet](https://kubernetes.io/docs/concepts/overview/components/#kubelet) is the agent that runs on each node and communicates with the master to receive commands and transmit the status of the pods. The network proxy or [kube-proxy](https://kubernetes.io/docs/concepts/overview/components/) maintains network rules on nodes to allow access to services from outside the Kubernetes cluster. Finally, the container runtime is the software responsible for the execution and management of containers within the pods.\n\n### Docker's role\n\nAmong all the components of a K8s cluster, the choice of runtime within the worker nodes is important. Different software is available for this, such as rkt or CRI-O, but Docker is the most commonly used tool.\n\n### What is the difference between Docker and Kubernetes?\n\nDocker is an open-source solution that is specifically used at the container level. 
It allows containers to be packaged in a standardized and lightweight format, which increases their portability in different environments. It is therefore a complementary tool to K8s that facilitates the management of containers themselves, while Kubernetes simplifies their integration and communication within the application.\n\n## What are the benefits of Kubernetes?\n\nGoogle launched Kubernetes in 2014, and the first stable version appeared in July 2015. Since then, the popularity of this software has not wavered, making K8s a benchmark in the field of container orchestration, especially for microservice-oriented architectures. So then, why use Kubernetes? This success is primarily due to the excellent performance of this software in container orchestration.\n\nThe benefits of Kubernetes are plentiful:\n- Automation: Kubernetes facilitates the automation of tasks related to the deployment, scaling, and updating of containerized applications.\n- Flexibility: The software adapts to different container technologies, as well as various hardware architectures and operating systems.\n- Scalability: K8s facilitates the deployment and management of thousands of containers, regardless of their status: running, paused, or stopped.\n- Migration: It is possible to easily migrate applications to Kubernetes without having to change the source code.\n- Multi-cluster support: Kubernetes centrally manages multiple container clusters distributed across different infrastructures.\n- Update management: The software supports rolling update deployments to update applications without service disruption.\n\n## A robust and scalable ecosystem\n\nKubernetes stands out for its ability to manage containers efficiently and securely, while maintaining its independence from cloud infrastructure providers. 
Its modular architecture adapts to the specific needs of each company and supports a very wide range of applications and services (web services, data processing, mobile applications, etc.).\n\nIn the race for digital transformation, Kubernetes also wins over people, thanks to its rich and scalable ecosystem within the open-source community. Managed by the Cloud Native Computing Foundation ([CNCF](https://www.cncf.io/)), K8s is supported by thousands of developers around the world. They contribute to the development of the project and the continuous improvement of its features.\n\n## What are the limitations of Kubernetes?\n\nThe benefits of Kubernetes make it a safe choice for many development teams in the cloud-native application space. Nevertheless, it is worth pointing out some of its limitations. Kubernetes requires a solid technical background and training in new development concepts and methods. The software can be complex to configure at the beginning of a project. However, configuration is crucial, especially to secure the platform. Having an experienced development team for K8s projects is therefore a significant asset.\n\nAnother challenge is the implementation and maintenance of a K8s architecture, which also requires time and resources, especially to update its various components and software. This raises the question of possible oversizing. In the case of a small application, or a project with no particular challenge in terms of scalability, a more basic architecture may suffice while being more economical.\n\n## Using Kubernetes within your teams\n\nTens of thousands of companies have adopted a Kubernetes architecture to carry out their digital transition. K8s is used by companies of all sizes, from startups to multinationals.\n\nThere are many examples of successful integrations, such as for Haven Technologies. 
Haven Technologies has migrated its SaaS services to K8s and relies in particular on a Kubernetes strategy with the GitLab DevSecOps platform to help its teams improve efficiency, security, and speed of software development. Check out [our client story](https://about.gitlab.com/customers/haven-technologies/) to learn more!\n\n## Kubernetes, Git, and GitLab\n\nKubernetes, Git, and GitLab are essential elements of the DevOps landscape. Kubernetes offers great flexibility to deploy and manage the various components of an application, while GitLab, which is built around Git and its native version control system, allows rigorous and accurate tracking of source code and changes, while providing a comprehensive suite of tools to manage the entire software development lifecycle.\n\nThis combination, together with a [GitOps approach](https://about.gitlab.com/topics/gitops/), which aims to automate the provisioning of modern cloud infrastructures, creates an agile environment for application development and deployment, thus making it possible to provide powerful, flexible, and scalable software. For more details, discover all [GitLab solutions to launch an application with Kubernetes](https://about.gitlab.com/solutions/kubernetes/).\n\n## Kubernetes FAQ\n### What are the competing solutions to K8s?\n\nThere are several alternatives to Kubernetes, including Docker Swarm, and Marathon. However, Kubernetes is considered the most mature and popular solution on the market. Its broad user base, abundant documentation, and active community support make Kubernetes an excellent choice for those looking to adopt a container orchestration system.\n\n### What is a Kubernetes cluster?\n\nA Kubernetes cluster is composed of a master node and several worker nodes. The master node is responsible for coordinating the tasks in the cluster, while the worker nodes execute these orchestration tasks and host the containers. 
K8s clusters are highly scalable – nodes can be added or removed to adapt cluster resources to the needs of the application.\n\n### How to get started with Kubernetes?\n\nTo begin, it is necessary to install the Kubernetes software on a compatible environment (Linux, macOS, or Windows). Kubernetes can be installed in a traditional hosting environment, but also in a cloud environment (Google Kubernetes Engine or Amazon EKS, for example). Users can download and install Kubernetes directly from their official site, and then proceed with the initial configuration necessary to connect the master and worker nodes. Once this step is completed, users are ready to deploy a first application using Kubernetes.\n\n### Why choose Kubernetes?\n\nKubernetes offers great flexibility and total portability between different cloud platforms or on-site infrastructures. By automating orchestration tasks, K8s helps to optimize resources, reduce operating costs, and free up time for developers and system administrators. 
Finally, the Kubernetes ecosystem is vast and is continuously developed by a large open-source community, enabling rapid innovation.\n\n## Learn more\n\n- [How to stream logs through the GitLab Dashboard for Kubernetes](https://about.gitlab.com/blog/how-to-stream-logs-through-the-gitlab-dashboard-for-kubernetes/)\n- [Kubernetes overview: Operate cluster data on the frontend](https://about.gitlab.com/blog/kubernetes-overview-operate-cluster-data-on-the-frontend/)\n- [Simplify your cloud account management for Kubernetes access](https://about.gitlab.com/blog/simplify-your-cloud-account-management-for-kubernetes-access/)\n",[9,835],"2024-08-22",{"slug":2113,"featured":6,"template":688},"kubernetes-the-container-orchestration-solution","content:en-us:blog:kubernetes-the-container-orchestration-solution.yml","Kubernetes The Container Orchestration Solution","en-us/blog/kubernetes-the-container-orchestration-solution.yml","en-us/blog/kubernetes-the-container-orchestration-solution",{"_path":2119,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":2120,"content":2126,"config":2133,"_id":2135,"_type":13,"title":2136,"_source":15,"_file":2137,"_stem":2138,"_extension":18},"/en-us/blog/leah-petersen-user-spotlight",{"title":2121,"description":2122,"ogTitle":2121,"ogDescription":2122,"noIndex":6,"ogImage":2123,"ogUrl":2124,"ogSiteName":675,"ogType":676,"canonicalUrls":2124,"schema":2125},"From motorcycle stunter to DevOps: Finding love for CI/CD","Switching to GitLab helped a newly minted DevOps engineer grasp the concept of CI/CD.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749663760/Blog/Hero%20Images/image-for-leah-post.jpg","https://about.gitlab.com/blog/leah-petersen-user-spotlight","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"Motorcycle stunter turned DevOps engineer says GitLab helped her learn to \"love\" CI/CD\",\n        \"author\": 
[{\"@type\":\"Person\",\"name\":\"Aricka Flowers\"}],\n        \"datePublished\": \"2018-06-21\",\n      }",{"title":2127,"description":2122,"authors":2128,"heroImage":2123,"date":2130,"body":2131,"category":790,"tags":2132},"Motorcycle stunter turned DevOps engineer says GitLab helped her learn to \"love\" CI/CD",[2129],"Aricka Flowers","2018-06-21","\nWhen professional motorcycle stuntwoman turned developer Leah Petersen switched from Jenkins to GitLab, she was a bit nervous to say the least. Having only worked in tech for nine months, the [Samsung SDS](https://www.samsungsds.com/us/en/index.html) engineer was not enthused about the prospect of having to learn a new application after feeling like she had “just started to get competent” with Jenkins.\n\nAfter a self-described mini pity party, she dove into GitLab head first, jumping into a few big ticket projects to get a handle on the landscape. Within a few short months, Petersen was so impressed by her GitLab CI/CD experience that she felt the need to shout her newfound “love” for continuous integration and continuous delivery from the virtual mountaintop of [her blog](https://leahnp.github.io/2018/moving-from-jenkins-to-gitlab-CI/).\n\nWe recently met up with Petersen to learn more about her transition to the tech world and experience with GitLab.\n\n\u003C!-- blank line -->\n\u003Cfigure class=\"video_container\">\n  \u003Ciframe src=\"https://www.youtube.com/embed/Avx_RftRT_o\" frameborder=\"0\" allowfullscreen=\"true\"> \u003C/iframe>\n\u003C/figure>\n\u003C!-- blank line -->\n\n### Q & A with Leah Petersen, DevOps Engineer\n\n**Where do you work and what does your team do?**\n\nI work for a team in Samsung SDS called the Cloud Native Computing Team, and I'm [a DevOps engineer](https://about.gitlab.com/topics/devops/what-is-a-devops-engineer/). We deal primarily with containers in Kubernetes and helping companies modernize and move to the cloud. My team is super unique. 
We were kind of treated like an incubated startup within Samsung, so we're really given a lot of autonomy to make our own decisions.\n\nOur team was put together about five years ago, and Samsung really made a bet on Kubernetes being the future of orchestrating huge workloads in the cloud. Initially, we were focusing mainly on research and development, contributing to the Kubernetes community and learning who was a part of it, what their motives were, and how we could find our place in it. Over the last year, Samsung has really pivoted our role in the company, and we're looking at how we can help Samsung as a global organization move to Kubernetes and containers.\n\n**Where did you work before Samsung?**\n\nI was a motorcycle stunt rider before I became an engineer, and that career kind of organically grew out of my passion for motorcycles. I started stunting, loved the community and was able to meet people all over the country and travel. Being one of the few women who did it, I organically started getting calls for jobs and gigs. I thought, “If I can do this in my 20s and make this my full-time career, I'm definitely going to take a shot at it,” so I did.\n\nIt was an amazing opportunity and experience to travel the world and meet people all over this planet who are passionate about this crazy thing that I'm also passionate about. And I got to work with a lot of amazing brands and raise awareness about the sport that I love. So, I don't have any regrets about that and cherish the time that I got to spend on a motorcycle professionally.\n\n**How did you move from being a professional motorcycle stunter to a DevOps engineer?**\n\nI had been looking for a new career path and wasn't really sure what I was going to do. I knew that I wanted to build some tangible skills. 
I wanted skills that had a clear market value, and tech definitely provides that.\n\nI ended up taking an online coding course in Python, and had this “aha” moment where I realized, not only can I do this, which I didn't think was previously possible, but it's fun; I really like solving these problems. At that point I started taking more online courses and learning as much as I could for free. Then I ended up finding [Ada Developers Academy](https://www.adadevelopersacademy.org/), and that was the perfect segue into the industry.\n\n> I had this “aha” moment where I realized, not only can I do this, which I didn't think was previously possible, but it's fun\n\n**Can you describe how your experience has been as a woman in tech?**\n\nYou definitely get a lot of strange reactions being a woman in tech. Walking into a situation, oftentimes people are surprised you're an engineer. You'll get reactions like, “Oh, I thought you were a project manager,” or, “I thought you were a recruiter,” or whatever other stereotype that you brought into the room. That can be discouraging and makes you feel unwelcome in that space. But I think we need women in every part of tech: frontend, backend, DevOps, operations, everything. If your interest is in UX, go for that. But don't let all the men who've been in the industry for 25 years on the operations side of things scare you off either. I really think we need diverse minds and approaches to problems in the whole spectrum of it.\n\nSometimes I forget about the gender disparity in tech because my team, specifically, has a couple of really amazing women who I get to work with every day. So, I'm very fortunate. But I recently went to KubeCon in Copenhagen, and it's an amazing conference with so much energy, but it's a real wake-up call when you see the gender disparity there. 
There's 4,000 guys walking around and you feel like you stick out [or] when you're sitting in an auditorium, look around and realize, “Oh, I'm the only lady here.” It's something that you can't look away from.\n\n**Why did you decide to go into DevOps engineering?**\n\nIn my boot camp classes we were focusing on web development and building Ruby on Rails and Node.js apps. We each had an opportunity to do an internship at companies in Seattle that support the Ada program. Samsung was one of them, and they came in to do a presentation about their involvement in open source and Kubernetes. I had no idea what they were talking about, but Kubernetes and the momentum of the open source community was really appealing to me. So I took a chance and picked Samsung, dove right in, and found my way as I went along. I'm really happy that I chose Kubernetes and to specialize in the cloud.\n\n>Kubernetes and the momentum of the open source community was really appealing to me. So I took a chance, dove right in, and found my way as I went along\n\n**How did you get started with GitLab CI/CD? And how would you describe your transition to the application?**\n\nI always felt like I was fighting with the CI platform we were on prior to GitLab. It was never really functioning how we wanted it to, and something was always kind of failing. The whole reason you have CI/CD is to get visibility into what's happening with your code, right? You want to run your code through this pipeline and make sure there are no bugs, that you’re packaging it correctly and putting it in the places that you need it to be in production. It's this hugely critical component of going from the developer's computer to the world; that's the pipeline. So you really need the visibility to see what is happening every step of the way.\n\nOn the old system, I felt that I just didn't have that visibility. 
I was digging for the problems and not able to understand where they were coming from, where they were originating from, why they were happening or how to fix them. I feel like GitLab definitely does a great job of assisting the user in finding the origin of a problem, tracing that step back and making it clear where your issues are and when you're having success.\n\n**How has using GitLab impacted your career and workflow?**\n\nThere's a lot of talk about accessibility and user experience in tech. And we all know what it's like to have a bad user experience with a piece of technology; it's the most frustrating thing in the entire world. As a developer, you deal with lots of different tech every single day. When I started using GitLab about a year and a half into my career, it was certainly the first platform where I was like, ‘I feel so at home here. Everything’s fluid. I can find where everything is. I understand what everything is.’ There aren't these big black holes of confusion that have me asking, “Why does this exist and what am I doing here?”\n\nWith GitLab, everything is just this cheery, happy place. 
And I really appreciate how it has now set the bar for me when it comes to the way in which a technology should function when I’m working with it.\n\nCover photo by [Rendiansyah Nugroho](https://unsplash.com/photos/JUePy_-uOSI) on [Unsplash](https://unsplash.com/)\n{: .note}\n",[900,9,727,835,1347,268,685,707],{"slug":2134,"featured":6,"template":688},"leah-petersen-user-spotlight","content:en-us:blog:leah-petersen-user-spotlight.yml","Leah Petersen User Spotlight","en-us/blog/leah-petersen-user-spotlight.yml","en-us/blog/leah-petersen-user-spotlight",{"_path":2140,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":2141,"content":2147,"config":2153,"_id":2155,"_type":13,"title":2156,"_source":15,"_file":2157,"_stem":2158,"_extension":18},"/en-us/blog/microservices-integrated-solution",{"title":2142,"description":2143,"ogTitle":2142,"ogDescription":2143,"noIndex":6,"ogImage":2144,"ogUrl":2145,"ogSiteName":675,"ogType":676,"canonicalUrls":2145,"schema":2146},"Tackling the microservices repository explosion challenge","Microservices have spawned an explosion of dependent projects with multiple repos, creating the need for an integrated solution – we're working on it right now.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749662898/Blog/Hero%20Images/microservices-explosion.jpg","https://about.gitlab.com/blog/microservices-integrated-solution","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"It's raining repos: The microservices repo explosion, and what we're doing about it\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Aricka Flowers\"}],\n        \"datePublished\": \"2018-11-26\",\n      }",{"title":2148,"description":2143,"authors":2149,"heroImage":2144,"date":2150,"body":2151,"category":683,"tags":2152},"It's raining repos: The microservices repo explosion, and what we're doing about it",[2129],"2018-11-26","\nGone are the days of \"set it and forget 
it\"-style software development. The increased demand for code and operations on all projects, especially [microservices](/topics/microservices/), means more repos. This calls for a more integrated solution to incorporate testing, security updates, monitoring, and more, says GitLab CEO [Sid Sijbrandij](/company/team/#sytses):\n\n>\"The bar's going up for software development. It's no longer enough to just write the code; you also have to write the tests. It's no longer enough to just ship it; you also have to monitor it. You can no longer make it once and forget about it; you have to stay current with security updates. For every product you make you have to integrate more of these tools. It used to be that only the big projects got all these things, but now every single service you ship should have these features, because other projects are dependent on it. One security vulnerability can be enough to take a company down.\"\n\nAn increasing number of project repos means exponential growth in the number of tools needed to handle them – bad news for those saddled with managing project dependencies. A streamlined workflow is essential to alleviate this burden – here's how we want to help you get there.\n\n### Everything under one roof\n\n\"With GitLab, we want to enable you to simply commit your code and have all the tools you need integrated out of the box,\" Sid said. \"You don't have to do anything else. It's monitored; we measure whether your dependencies have a vulnerability and fix it for you automatically. I think that's the big benefit of GitLab; that you don't have to go into stitching together 10 tools for every project that you make.\"\n\nBy using an integrated solution to manage an ever-growing number of microservices, you can avoid having engineers siloed off with their respective teams and tools. 
Creating visibility among teams and getting rid of the need for handoffs leads to a faster [DevOps lifecycle](/topics/devops/) while also ensuring that your projects deploy and remain stable, Sid explains.\n\n\"Our customers that switched from a fragmented setup and were only able to get projects through that cycle a few times a year are now deploying a few times a week,\" Sid said. \"The ability to go from planning to monitoring it in production is what GitLab brings to the table. We have an ample amount of customer case studies showing how we helped improve their speed.\"\n\n### Better support for microservices\n\nWe are beefing up our support of microservices, and have a number of features in the works to improve this area, including [group level Kubernetes clusters](https://gitlab.com/gitlab-org/gitlab-ce/issues/34758), a [global Docker registry browser](https://gitlab.com/gitlab-org/gitlab-ce/issues/49336), and adding the [ability to define multiple pipelines](https://gitlab.com/gitlab-org/gitlab-ce/issues/22972). This is to build on what's already there:\n\n\"We have great support for microservices. GitLab has [multi-project pipelines](/blog/use-multiproject-pipelines-with-gitlab-cicd/) and [can trigger pipelines from multi-projects via API](https://docs.gitlab.com/ee/ci/jobs/ci_job_token.html),\" Sid detailed. \"The CI Working Group of the CNCF (Cloud Native Computing Foundation), the most cloud native organization in the world probably, uses GitLab to test their projects. We've got great support for things like [Kubernetes](/solutions/kubernetes/) and cloud native technologies. In GitLab, every project you have can be attached to a Kubernetes cluster, and GitLab uses that to run everything that’s going on. We know that a lot of our users and customers are using microservices, and we work great with them.\"\n\n### Future focus: best-in-class solutions\n\nGitLab is much more than just version control. 
Having started with the planning, creating and verifying stages in 2011 and 2012, we’ve had time to make those capabilities very strong. We are now strengthening our offerings in the other steps of the DevOps lifecycle: managing, packaging, releasing, configuring, monitoring and security.\n\n\"We are seeing enormous progress in those areas, but they can't go head to head with the best-in-class solutions just yet. So that's going to be the theme for GitLab next year, to make sure each of our solutions is best in class instead of just the three things we started with,\" Sid says. \"And we won't take our eyes off the ball.\"\n\n[Cover image](https://unsplash.com/photos/wplxPRCF7gA) by [Ruben Bagues](https://unsplash.com/@rubavi78) on Unsplash\n{: .note}\n",[814,232,9,855,902],{"slug":2154,"featured":6,"template":688},"microservices-integrated-solution","content:en-us:blog:microservices-integrated-solution.yml","Microservices Integrated Solution","en-us/blog/microservices-integrated-solution.yml","en-us/blog/microservices-integrated-solution",{"_path":2160,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":2161,"content":2166,"config":2171,"_id":2173,"_type":13,"title":2174,"_source":15,"_file":2175,"_stem":2176,"_extension":18},"/en-us/blog/moving-to-gcp",{"title":2162,"description":2163,"ogTitle":2162,"ogDescription":2163,"noIndex":6,"ogImage":1140,"ogUrl":2164,"ogSiteName":675,"ogType":676,"canonicalUrls":2164,"schema":2165},"We’re moving from Azure to Google Cloud Platform","GitLab.com is migrating to Google Cloud Platform – here’s what this means for you now and in the future.","https://about.gitlab.com/blog/moving-to-gcp","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"We’re moving from Azure to Google Cloud Platform\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Andrew Newdigate\"}],\n        \"datePublished\": \"2018-06-25\",\n      
}",{"title":2162,"description":2163,"authors":2167,"heroImage":1140,"date":2168,"body":2169,"category":683,"tags":2170},[1284],"2018-06-25","\nUpdate Jul 19, 2018: The latest info can be found in the [GCP migration update](/blog/gcp-move-update/) blog post. \n{: .alert .alert-info}\n\nImproving the performance and reliability of [GitLab.com](/pricing/)  has been a top priority for us. On this front we've made some incremental gains while we've been planning for a large change with the potential to net significant results: moving from Azure to Google Cloud Platform (GCP).\n\nWe believe [Kubernetes](/solutions/kubernetes/) is the future. It's a technology that makes reliability at massive scale possible. This is why earlier this year we shipped native [integration with Google Kubernetes Engine](/blog/gke-gitlab-integration/) (GKE) to give GitLab users a simple way to use Kubernetes. Similarly, we've chosen GCP as our cloud provider because of our desire to run GitLab on Kubernetes. Google invented Kubernetes, and GKE has the most robust and mature Kubernetes support. Migrating to GCP is the next step in our plan to make GitLab.com ready for your mission-critical workloads.\n\nOnce the migration has taken place, we’ll continue to focus on bumping up the stability and scalability of GitLab.com, by moving our worker fleet across to Kubernetes using GKE. This move will leverage our [Cloud Native charts](https://gitlab.com/charts/gitlab), which with [GitLab 11.0](/releases/2018/06/22/gitlab-11-0-released/#cloud-native-gitlab-helm-chart-now-beta) are now in beta.\n\n## How we’re preparing for the migration\n\n### Geo\n\nOne GitLab feature we are utilizing for the GCP migration is our [Geo product](https://docs.gitlab.com/ee/administration/geo/).\nGeo allows for full, read-only mirrors of GitLab instances. 
Besides browsing the GitLab UI, Geo instances can be used for cloning and fetching projects, allowing geographically distributed teams to collaborate more efficiently.\n\nNot only does this allow for disaster recovery in case of an unplanned outage, but Geo can also be used for a planned failover to migrate GitLab instances.\n\n![GitLab Geo - Migration](https://about.gitlab.com/images/gitlab_ee/gitlab_geo_diagram_migrate.png){: .medium.center}\n\nFollowing our mantra of dogfooding every part of our product, we are using Geo to move GitLab.com from Microsoft Azure to Google Cloud Platform. Geo works well and scales: it has been used reliably by many customers since going GA. We believe Geo will perform well during the migration and see this event as another proof point of its value.\n\nRead more about Disaster Recovery with Geo in our [Documentation](https://docs.gitlab.com/ee/administration/geo/disaster_recovery/).\n\n#### The Geo transfer\n\nFor the past few months, we have maintained a Geo secondary site of GitLab.com, called `gprd.gitlab.com`, running on Google Cloud Platform. This secondary keeps an up-to-date synchronized copy of about 200TB of Git data and 2TB of relational data in PostgreSQL. Originally we also replicated Git LFS, File Uploads and other files, but this has since been migrated to Google Cloud Storage object storage, in a parallel effort.\n\nFor logistical reasons, we selected GCP's `us-east1` site in the US state of South Carolina. Our current Azure datacenter is in US East 2, located in Virginia. This is a round-trip distance of 800km, or 3 light-milliseconds. In reality, this translates into a 30ms ping time between the two sites.\n\nBecause of the huge amount of data we need to synchronize between Azure and GCP, we were initially concerned about this additional latency and the risk it might pose to our Geo transfer. 
However, after our initial testing, we realized that network latency and bandwidth were not bottlenecks in the transfer.\n\n### Object storage\n\nIn parallel to the Geo transfer, we are also migrating all file artifacts, including CI Artifacts, Traces (CI log files), file attachments, LFS objects and other file uploads to [Google Cloud Storage](https://cloud.google.com/storage/) (GCS), Google's managed object storage implementation. This has involved moving about 200TB of data off our Azure-based file servers into GCS.\n\nUntil recently, GitLab.com stored these files on NFS servers, with NFS volumes mounted onto each web and API worker in the fleet. NFS is a single-point-of-failure and can be difficult to scale. Switching to GCS allows us to leverage its built-in redundancy and multi-region capabilities. This in turn will help to improve our own availability and remove single-points-of-failure from our stack. The object storage effort is part of our longer-term strategy of lifting GitLab.com infrastructure off NFS. The [Gitaly project](https://gitlab.com/gitlab-org/gitaly), a Git RPC service for GitLab, is part of the same initiative. This effort to migrate GitLab.com off NFS is also a prerequisite for our plans to move GitLab.com over to Kubernetes.\n\n### How we're working to ensure a smooth failover\n\nOnce or twice a week, several teams, including [Geo](/handbook/engineering/development/enablement/systems/geo/), [Production](https://about.gitlab.com/handbook/engineering/infrastructure/production/), and [Quality](https://about.gitlab.com/handbook/engineering/quality/), get together to jump onto a video call and conduct a rehearsal of the failover in our staging environment.\n\nLike the production event, the rehearsal takes place from Azure across to GCP. We timebox this event, and carefully monitor how long each phase takes, looking to cut time off wherever possible. 
The failover currently takes two hours, including quality assurance of the failover environment.\n\nThis involves four steps:\n\n- A [preflight checklist](https://gitlab.com/gitlab-com/migration/blob/master/.gitlab/issue_templates/preflight_checks.md),\n- The main [failover procedure](https://gitlab.com/gitlab-com/migration/blob/master/.gitlab/issue_templates/failover.md),\n- The [test plan](https://gitlab.com/gitlab-com/migration/blob/master/.gitlab/issue_templates/test_plan.md) to verify that everything is working, and\n- The [failback procedure](https://gitlab.com/gitlab-com/migration/blob/master/.gitlab/issue_templates/failback.md), used to undo the changes so that the staging environment is ready for the next failover rehearsal.\n\nSince these documents are stored as issue templates on GitLab, we can use them to create issues on each successive failover attempt.\n\nAs we run through each rehearsal, new bugs, edge-cases and issues are discovered. We track these issues in the [GitLab Migration tracker](https://gitlab.com/gitlab-com/migration/issues). Any changes to the failover procedure are then made as [merge requests into the issue templates](https://gitlab.com/gitlab-com/migration/merge_requests?scope=all&state=all).\n\nThis process allows us to iterate rapidly on the failover procedure, improving the failover documentation and helping the team build confidence in the procedure.\n\n## When will the migration take place?\n\nOur absolute [top priority](https://gitlab.com/gitlab-com/migration#failover-priorities) for the failover is to ensure that we protect the integrity of our users' data. We will only conduct the failover once we are completely satisfied that all serious issues have been ironed out, that there is no risk of data loss, and that our new environment on Google Cloud Platform is ready for production workloads.\n\nThe failover is currently scheduled for Saturday, July 28, 2018. 
We will follow this post up shortly with further information on the event and will provide plenty of advance notice.\n\nRead the most recent update on [GitLab's journey from Azure to GCP](/blog/gitlab-journey-from-azure-to-gcp/) here!\n",[1149,727,1150,9],{"slug":2172,"featured":6,"template":688},"moving-to-gcp","content:en-us:blog:moving-to-gcp.yml","Moving To Gcp","en-us/blog/moving-to-gcp.yml","en-us/blog/moving-to-gcp",{"_path":2178,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":2179,"content":2185,"config":2192,"_id":2194,"_type":13,"title":2195,"_source":15,"_file":2196,"_stem":2197,"_extension":18},"/en-us/blog/open-source-nasa-gl",{"title":2180,"description":2181,"ogTitle":2180,"ogDescription":2181,"noIndex":6,"ogImage":2182,"ogUrl":2183,"ogSiteName":675,"ogType":676,"canonicalUrls":2183,"schema":2184},"MRI Technologies used GitLab for unified toolchains to NASA","Live from GitLab Commit: NASA will be flying Kubernetes clusters to the moon and GitLab is helping.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749678434/Blog/Hero%20Images/nasagitlab.jpg","https://about.gitlab.com/blog/open-source-nasa-gl","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"GitLab Commit: How MRI Technologies used GitLab to bring unified toolchains to NASA\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Valerie Silverthorne\"}],\n        \"datePublished\": \"2019-09-17\",\n      }",{"title":2186,"description":2181,"authors":2187,"heroImage":2182,"date":2188,"body":2189,"category":876,"tags":2190},"GitLab Commit: How MRI Technologies used GitLab to bring unified toolchains to NASA",[680],"2019-09-17","\nNASA can put [Rovers on Mars](https://mars.nasa.gov/mer/), but a complex legacy software system proved a bit of a challenge. 
Speaking at GitLab Commit in Brooklyn, [Marshall Cottrell](https://www.linkedin.com/in/marshall-cottrell-27b385181) of [MRI Technologies](https://www.mricompany.com) explained how the company teamed up with NASA to launch the space agency into the era of modern application development using Kubernetes and GitLab.\n\nIn September 2018 MRI began work on a new software development platform called APPDAT. \"It's the only platform taking a totally 'fresh approach' to application development and data science activities within the Agency,\" Marshall said. The team's challenge was to update an Oracle-based legacy SCM solution using open source technologies and APIs. At the time NASA had no toolchains to support CI/CD during development and lots of silos of information. \"There was no mechanism for us to disseminate innovations, best practices, or what we learned,\" Marshall said. NASA needed a unified toolchain and platform for software delivery. \"GitLab was chosen as the platform source control management solution because it is the only product in this space that integrates all stages of the DevSecOps lifecycle.\"\n\n## A laser focus helps\n\nPerhaps not surprisingly MRI had ambitious goals for APPDAT, Marshall explained. The overarching hope was to build an automated DevOps platform that served as the single source of truth. Until MRI got involved, NASA had no way to actually \"own\" the software development process; teams operated in a piecemeal fashion, choosing contractors and solutions based on situational needs rather than looking at the big picture. 
Those decisions left NASA subject to potentially \"abusive behavior,\" Marshall explained.\n\nSo MRI laid out a number of goals:\n\n- Empower teams to fully manage the resources they support\n- Demonstrate and promote fully open project management and collaboration\n- Create a sandbox for prototyping with no barriers to entry\n- Assemble an API and data economy that would eliminate silos and promote reusability\n- Establish platform-level security controls with a goal of \"compliant by default\"\n\nTo get there, MRI emphasized collaboration and tried to reach out to the \"forward-leaning\" customers and individual civil servant developers, engineers and researchers who were eager to contribute. The team adhered strictly to cloud native, Zero Trust and open source approaches and, in the end, came up with a Kubernetes platform that met the space agency's needs for today and in the future. The technology choices were important, but so was the time spent laying the groundwork for a culture change. \"Many modernization proposals try to meet everyone where they're at,\" Marshall explained. \"A more opinionated approach allows us to provide a succinct and unified toolchain that all parties can contribute to, evolve, and improve over time.\"\n\nToday the 61-year-old space agency has a modern platform where developers can easily collaborate with non-developers, no complex tooling is required, and context switching is a thing of the past, Marshall said. APPDAT syncs from the agency's existing SCM solutions, so everyone can continue to use the same tools.\n\nPerhaps most exciting, NASA plans to have astronauts established on the moon by 2024 as part of the [Artemis program](https://www.nasa.gov/what-is-artemis). That will include a data center, and Marshall is confident Kubernetes will be part of the launch.\n\n\"We’ve already begun to change minds at NASA and you can do it at your enterprise too,\" Marshall said. 
His last best advice: Play the long game, only innovate when it makes things easier, and a bottom-up approach is an easy way to make friends.\n\nWatch Marshall's entire presentation here:\n\n\u003Ciframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/RsUw4Ueyn-c\" frameborder=\"0\" allow=\"accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen>\u003C/iframe>\n\nDon't miss out on the chance to network with others on the same DevOps journey. Get your tickets to [Commit London on October 9](/events/commit/).\n\nCover image by [David Torres](https://unsplash.com/@djjabbua) on [Unsplash](https://unsplash.com/)\n{: .note}\n",[1150,9,707,835,2191],"frontend",{"slug":2193,"featured":6,"template":688},"open-source-nasa-gl","content:en-us:blog:open-source-nasa-gl.yml","Open Source Nasa Gl","en-us/blog/open-source-nasa-gl.yml","en-us/blog/open-source-nasa-gl",{"_path":2199,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":2200,"content":2205,"config":2213,"_id":2215,"_type":13,"title":2216,"_source":15,"_file":2217,"_stem":2218,"_extension":18},"/en-us/blog/pull-based-kubernetes-deployments-coming-to-gitlab-free-tier",{"title":2201,"description":2202,"ogTitle":2201,"ogDescription":2202,"noIndex":6,"ogImage":1240,"ogUrl":2203,"ogSiteName":675,"ogType":676,"canonicalUrls":2203,"schema":2204},"Pull-based GitOps moving to GitLab Free tier","Learn how this change provides organizations increased flexibility, security, scalability, and automation in cloud-native environments.","https://about.gitlab.com/blog/pull-based-kubernetes-deployments-coming-to-gitlab-free-tier","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"Pull-based GitOps moving to GitLab Free tier\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Sandra Gittlen\"},{\"@type\":\"Person\",\"name\":\"Lauren Minning\"}],\n        \"datePublished\": \"2022-05-18\",\n      
}",{"title":2201,"description":2202,"authors":2206,"heroImage":1240,"date":2209,"body":2210,"category":1004,"tags":2211},[2207,2208],"Sandra Gittlen","Lauren Minning","2022-05-18","\n\nGitLab will include support for pull-based deployment in the platform’s Free tier in an upcoming release, which will provide users increased flexibility, security, scalability, and automation in cloud-native environments. With pull-based deployment, DevOps teams can use the [GitLab agent for Kubernetes](/blog/introducing-the-gitlab-kubernetes-agent/) to automatically identify and enact application changes. \n\n“DevOps teams at all levels benefit from utilizing GitOps strategies such as pull-based deployment in their cloud-native environments. By offering this feature in GitLab’s Free tier, we can introduce more organizations to the power and utility of this secure and scalable functionality,” says [Viktor Nagy](https://gitlab.com/nagyv-gitlab), product manager of GitLab’s Configure Group.\n\nAs an open-core company, GitLab is happy to contribute to the GitOps community and enable the adoption of best practices in the industry.\n\n## What is pull-based deployment?\n\nPull-based and push-based deployment are [two main approaches to GitOps](/topics/gitops/), an operational framework that takes DevOps best practices used for application development such as version control, collaboration, compliance, and [CI/CD](/topics/ci-cd/) tooling, and applies them to infrastructure automation. \n\nGitOps enables operations teams to [move as quickly as their application development counterparts](/blog/gitops-done-3-ways/) by making use of automation and scalability, without sacrificing security. \n\nWhile push-based, or agentless, deployment relies on a CI/CD tool to push changes to the infrastructure environment, pull-based deployment uses an agent installed in a cluster to pull changes whenever there is a deviation from the desired configuration. 
In the pull-based approach, deployment targets are limited to Kubernetes and an agent must be installed in each Kubernetes cluster.\n\n“As long as the GitLab agent for Kubernetes on your infrastructure has the necessary access rights in your cluster, you can configure everything automatically, reducing the DevOps workload and the opportunity to introduce errors,” Nagy says.\n\n## Pull-based deployment vs. push-based deployment\n\nPush-based deployment and pull-based deployment each have their pros and cons. Here is a list of the advantages and disadvantages of each GitOps practice:\n\nPush-based deployment pros:\n- ease of use\n- well-known as part of CI/CD\n- more flexible, as deployment targets can be on physical servers or virtual containers, not restricted to Kubernetes clusters \n\nPush-based deployment cons:\n- requires organizations to open their firewall to a cluster and grant admin access to external CI/CD\n- requires organizations to adjust their CI/CD pipelines when they introduce new environments\n\nPull-based deployment pros:\n- secure infrastructure - no need to open your firewall or grant admin access externally\n- changes can be automatically detected and applied without human intervention\n- easier scaling of identical clusters\n\nPull-based deployment cons:\n- agent needs to be installed in every cluster\n- limited to Kubernetes only\n\n## How pull-based deployment impacts the Free-tier experience\n\nIncluding support for pull-based deployments in GitLab’s Free tier provides a tremendous competitive advantage for smaller organizations as they can now apply automation in a safe and scalable manner to their cloud-native infrastructure, including virtual containers and clusters. And, for organizations that are trying to get started quickly by minimizing the number of tools in their infrastructure ecosystem, this functionality is included in One DevOps Platform, not as a point solution. 
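\n\nThe pull-based flow above is driven by the agent's configuration file, committed to the repository that registers the agent. As a rough sketch only (the agent name `demo-agent` and the project path `my-group/cluster-manifests` are hypothetical, not from this post), a minimal GitOps section could look like this:\n\n```yaml\n# .gitlab/agents/demo-agent/config.yaml (hypothetical agent name)\ngitops:\n  manifest_projects:\n    # Project whose Kubernetes manifests the agent watches and applies\n    - id: my-group/cluster-manifests\n      paths:\n        - glob: 'manifests/**/*.yaml'\n```\n\nWith a file like this in place, the agent pulls any change merged into the manifest project and applies it to the cluster, rather than having a CI/CD job push it in.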
\n\n“DevOps teams don’t have to continuously write code for new infrastructure elements – they can write the code once, within a single DevOps platform, and have the agent automatically find it, pull it, and apply it, as well as configuration changes,” Nagy says. “Also, with the availability of pull-based deployment in this introductory tier, newcomers to GitLab will immediately be able to modernize application development and reduce the security risk associated with configuring such infrastructure.”\n\n_This blog post contains information related to upcoming products, features, and functionality. It is important to note that the information presented is for informational purposes only. Please do not rely on this information for purchasing or planning purposes. As with all projects, the items mentioned in this blog post and linked pages are subject to change or delay. The development, release, and timing of any products, features, or functionality remain at the sole discretion of GitLab Inc._\n\n\n\n\n\n\n",[2212,9,855,727,539],"DevOps platform",{"slug":2214,"featured":6,"template":688},"pull-based-kubernetes-deployments-coming-to-gitlab-free-tier","content:en-us:blog:pull-based-kubernetes-deployments-coming-to-gitlab-free-tier.yml","Pull Based Kubernetes Deployments Coming To Gitlab Free Tier","en-us/blog/pull-based-kubernetes-deployments-coming-to-gitlab-free-tier.yml","en-us/blog/pull-based-kubernetes-deployments-coming-to-gitlab-free-tier",{"_path":2220,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":2221,"content":2226,"config":2231,"_id":2233,"_type":13,"title":2234,"_source":15,"_file":2235,"_stem":2236,"_extension":18},"/en-us/blog/running-a-consistent-serverless-platform",{"title":2222,"description":2223,"ogTitle":2222,"ogDescription":2223,"noIndex":6,"ogImage":1641,"ogUrl":2224,"ogSiteName":675,"ogType":676,"canonicalUrls":2224,"schema":2225},"Run a consistent serverless platform with GitLab and Knative","Portability of your serverless platform is 
now easy with GitLab and Knative.","https://about.gitlab.com/blog/running-a-consistent-serverless-platform","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"Run a consistent serverless platform with GitLab and Knative\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Daniel Gruesso\"}],\n        \"datePublished\": \"2019-05-02\",\n      }",{"title":2222,"description":2223,"authors":2227,"heroImage":1641,"date":1363,"body":2229,"category":300,"tags":2230},[2228],"Daniel Gruesso","\nThis past April, [Cloud Run](https://cloud.google.com/run/) was announced at Google Cloud Next. As a Google Cloud partner, GitLab had the opportunity to participate and demo our integration during the talk titled, \"[Run a consistent serverless platform anywhere with Kubernetes and Knative](https://youtu.be/lb_bRRAgEyc?t=1100).\"\n\n\u003C!-- blank line -->\n\u003Cfigure class=\"video_container\">\n  \u003Ciframe src=\"https://www.youtube.com/embed/lb_bRRAgEyc?start=1100\" frameborder=\"0\" allowfullscreen=\"true\"> \u003C/iframe>\n\u003C/figure>\n\u003C!-- blank line -->\n\nJust as Kubernetes has become the de facto default platform for running containers, Knative is shaping up to become the answer for running [serverless](/topics/serverless/) workloads in Kubernetes. Cloud Run brings all the benefits of Knative in a fully managed service or as an add-on to your Kubernetes cluster (called “Cloud Run on GKE”), abstracting developers from the complexities of deploying Kubernetes, Knative, and managing a cluster. This empowers developers to focus on adding value vs having to deploy and manage infrastructure.\n\nAt GitLab we believe in the power of open source and adopted Kubernetes and Knative from early on. 
During the talk, we demoed how GitLab enables operators to deploy Knative with ease so that developers can start deploying Functions-as-a-service (FaaS) or serverless applications using GitLab’s built-in features. GitLab also provides the configured Istio-Ingress endpoints automatically, which operators can then use to configure DNS for their domain, as well as providing the option to bind the domain to the ingress endpoint (via ConfigMap) so that the serving controller can configure the routes. This is all done with a single click.\n\nAfter provisioning your project with the required [serverless templates](https://docs.gitlab.com/ee/update/removals.html), GitLab will automatically build and deploy your application or function as a Knative service, provide you with the endpoint where the service is provisioned, and display load/invocation metrics for your function.\n\n![GitLab Serverless](https://docs.gitlab.com/ee/update/removals.html){: .shadow.small.center.wrap-text}\n\nWhile it’s still early on, we are very excited to partner with both Google Cloud and the Knative community to bring all this awesome functionality to the GitLab community.\n\n{::options parse_block_html=\"true\" /}\n\n\u003Ci class=\"fab fa-gitlab\" style=\"color:rgb(107,79,187); font-size:.85em\" aria-hidden=\"true\">\u003C/i>&nbsp;&nbsp;\nLearn more about [GitLab Serverless](https://docs.gitlab.com/ee/user/project/clusters/serverless)\n&nbsp;&nbsp;\u003Ci class=\"fab fa-gitlab\" style=\"color:rgb(107,79,187); font-size:.85em\" aria-hidden=\"true\">\u003C/i>\n{: .alert .alert-webcast}\n\n{::options parse_block_html=\"true\" /}\n\n\u003Ci class=\"fab fa-gitlab\" style=\"color:rgb(107,79,187); font-size:.85em\" aria-hidden=\"true\">\u003C/i>&nbsp;&nbsp;\nLearn more about [Cloud Run](http://cloud.google.com/run)\n&nbsp;&nbsp;\u003Ci class=\"fab fa-gitlab\" style=\"color:rgb(107,79,187); font-size:.85em\" aria-hidden=\"true\">\u003C/i>\n{: .alert 
.alert-webcast}\n",[727,278,1150,1149,9],{"slug":2232,"featured":6,"template":688},"running-a-consistent-serverless-platform","content:en-us:blog:running-a-consistent-serverless-platform.yml","Running A Consistent Serverless Platform","en-us/blog/running-a-consistent-serverless-platform.yml","en-us/blog/running-a-consistent-serverless-platform",{"_path":2238,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":2239,"content":2244,"config":2251,"_id":2253,"_type":13,"title":2254,"_source":15,"_file":2255,"_stem":2256,"_extension":18},"/en-us/blog/secure-containers-devops",{"title":2240,"description":2241,"ogTitle":2240,"ogDescription":2241,"noIndex":6,"ogImage":1057,"ogUrl":2242,"ogSiteName":675,"ogType":676,"canonicalUrls":2242,"schema":2243},"A shift left strategy for the cloud","Protect your software in the cloud by bringing vulnerability testing closer to remediation.","https://about.gitlab.com/blog/secure-containers-devops","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"A shift left strategy for the cloud\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Cindy Blake\"},{\"@type\":\"Person\",\"name\":\"Vanessa Wegner\"}],\n        \"datePublished\": \"2019-05-03\",\n      }",{"title":2240,"description":2241,"authors":2245,"heroImage":1057,"date":2248,"body":2249,"category":790,"tags":2250},[2246,2247],"Cindy Blake","Vanessa Wegner","2019-05-03","\n\nBusinesses continually adopt new technologies to become more efficient and\neffective. This move toward efficiency in IT has brought a “shift left” to\n[application security](/topics/devsecops/) testing. 
Methodologies like DevOps and Agile work with iterative\nand [MVP](https://www.agilealliance.org/glossary/mvp/) states, meaning that apps are constantly updating and constantly need\ntesting and retesting – sometimes daily or multiple times per day.\n\n[Serverless](/topics/serverless/), cloud native, containers, and Kubernetes are changing how apps are\ndeployed and managed. This has expanded the attack surface in the form of new\nlayers of complexity and more settings and updates to manage, which also means\nmore room for manual error. In a container, this includes the image, registry,\nand east-west traffic, while in Kubernetes, this includes access and\nauthentication, runtime resources, and network policies. Traffic between apps\nin a container does not cross perimeter network security, but should still be\nmonitored for malicious traffic between apps and the resources they use.\n\n## Your cloud-based ecosystem doesn’t provide comprehensive security\n\nCloud providers, orchestrators, and other partners don’t provide a full\nspectrum of security capabilities out of the box – even with their help, your\nteam must create and maintain their own security policies and continuously\nmonitor your ecosystem for any unusual or malicious activity. 
While network\nsegmentation and perimeter security for your guest VMs or containers might be\navailable, your engineer will typically need to configure that.\n\nThe figure below outlines the responsibilities of cloud providers, security\nvendors, and end-users, across apps, hosts, networks, and foundation services.\nThe responsibilities in purple and orange are _primarily_ the responsibility of\nthe cloud provider and security vendors, but our engineers tell us that they\nare involved in every cell of this chart in some way.\n\n![Security responsibilities in your cloud ecosystem](https://about.gitlab.com/images/blogimages/container-security-responsibilities.png){: .shadow.medium.center}\n\n## Treat security as a critical outcome, not a department\n\nSecurity should be top of mind for everyone in the business, not just your\nsecurity team. While the complexity of your infrastructure builds, new tools\nand capabilities give opportunity for everyone to contribute to the security\neffort. Here are a few areas of change that will help you rally the masses in\ndefense of your business:\n- Cloud providers are beginning to offer more security capabilities.\n- System updates – and staying current with your patches – could very much save the day.\n- Automating your processes could make or break the business. While guidelines\nfor humans are necessary, you need automation to abstract the complexity of\nyour infrastructure. Soon, automated capabilities to translate plain-language\npolicies into the growing multitude of settings will make their way into the\nmarket.\n\n### Take a Zero Trust approach to your applications\n\nThe foundational idea of [Zero Trust](/blog/evolution-of-zero-trust/) is simple: Trust nothing and always assume\nthe bad guys are trying to get in. It’s time to take your security beyond the\ntraditional network-perimeter approach and extend Zero Trust from data,\nnetwork, and endpoints to your application infrastructure. 
It also wouldn’t\nhurt to protect the software development lifecycle (SDLC) to ensure the integrity of your software is not\ncompromised, given all of the automation in a typical DevOps toolchain.\n\n## Three key principles to secure next-generation IT\n\n### 1. Enhance your security practices with DevSecOps\n\nAs you iterate on software, dovetail security into each iteration through [DevSecOps](/solutions/security-compliance/) – not simply\nto test security for the entire history of the app, but to test the impact of\neach change made in every update. Retrofitting your apps and software for\nsecure functionality will slow down your release cycle. Marrying the two\nwill save both time in the present, and heartache in the future when\nyour software is inevitably attacked. Unfortunately, traditional methods don’t\nfit the bill when it comes to DevOps; it’s too expensive and too slow to\nscan every piece of code manually. With a [shift left](/topics/ci-cd/shift-left-devops/) strategy, security scans can be automated into every\ncode commit – meaning you no longer need to choose between risk, cost, and\nagility.\n\n[Arm your developers to resolve vulnerabilities early in the SDLC, leaving your\nsecurity team free to focus on exceptions](/blog/speed-secure-software-delivery-devsecops/).\nWith GitLab, a [review app](https://docs.gitlab.com/ee/ci/review_apps/) is spun up at code commit – before the\nindividual developer’s code is merged to master. The developer can see and\ntest the working application, with test results highlighting the impact of the\ncode change. 
[Dynamic application security testing](https://docs.gitlab.com/ee/user/application_security/dast/) (DAST)\ncan then scan the review app, and the developer can quickly iterate to resolve\nvulnerabilities reported in their pipeline report.\n\n![View dynamic application security testing within GitLab.](https://about.gitlab.com/images/blogimages/dast-example.png){: .shadow.medium.center}\n[Learn more about DAST in GitLab's product documentation.](https://docs.gitlab.com/ee/user/application_security/dast/)\n\n### 2. Secure horizontally before digging deeper\n\nWe often fall into the trap of going deep on a single aspect of security –\nleaving other obvious aspects completely exposed. For instance, you may\nuse a powerful scanner for your mission-critical apps but neglect to scan\nothers; or, you may choose to save resources by not scanning your third-party\ncode, with the assumption that its widespread use means it’s checked out.\n\nAvoid focusing so much on application security that you forget about container\nscanning, orchestrators, and access management.\n\n### 3. Simplicity and integration wins\n\nThe key is to bring security scanning to the development process by having a\ntool like GitLab that allows developers to stay within the same platform or\ninterface to both code and scan. Making the process easier increases the\nlikelihood that it’ll get done – and making the process automatic within the\ntool ensures that it will happen every time there is a code update.\n\nReady to deliver secure apps with every update? 
[Just commit.](/solutions/security-compliance/)\n{: .alert .alert-gitlab-purple .text-center}\n\nCover image by [Frank McKenna](https://unsplash.com/@frankiefoto) on [Unsplash](https://unsplash.com/photos/tjX_sniNzgQ?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText)\n{: .note}\n",[727,685,9,855],{"slug":2252,"featured":6,"template":688},"secure-containers-devops","content:en-us:blog:secure-containers-devops.yml","Secure Containers Devops","en-us/blog/secure-containers-devops.yml","en-us/blog/secure-containers-devops",{"_path":2258,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":2259,"content":2265,"config":2271,"_id":2273,"_type":13,"title":2274,"_source":15,"_file":2275,"_stem":2276,"_extension":18},"/en-us/blog/simple-kubernetes-management-with-gitlab",{"title":2260,"description":2261,"ogTitle":2260,"ogDescription":2261,"noIndex":6,"ogImage":2262,"ogUrl":2263,"ogSiteName":675,"ogType":676,"canonicalUrls":2263,"schema":2264},"Simple Kubernetes management with GitLab","Follow our tutorial to provision a Kubernetes cluster and manage it with IAC using Terraform and Helm in 20 minutes or less.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749670037/Blog/Hero%20Images/auto-deploy-google-cloud.jpg","https://about.gitlab.com/blog/simple-kubernetes-management-with-gitlab","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"Simple Kubernetes management with GitLab\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Noah Ing\"}],\n        \"datePublished\": \"2022-11-15\",\n      }",{"title":2260,"description":2261,"authors":2266,"heroImage":2262,"date":2268,"body":2269,"category":683,"tags":2270},[2267],"Noah Ing","2022-11-15","Kubernetes can be very complex and has dozens of tutorials out there on how\nto provision and manage a cluster. 
This tutorial aims to provide a simple,\nlightweight solution to provision a Kubernetes cluster and manage it with\ninfrastructure as code (IaC) using Terraform and Helm in 20 minutes or less.\n\n\n**The final product of this tutorial will be two IaC repositories with fully\nfunctional CI/CD pipelines:**\n\n\n1.\n[gitlab-terraform-k8s](https://gitlab.com/gitlab-org/configure/examples/gitlab-terraform-eks)\n- A single source of truth to provision, configure, and manage your\nKubernetes infrastructure using Terraform\n\n1.\n[cluster-management](https://gitlab.com/gitlab-org/project-templates/cluster-management)\n- A single source of truth to define the desired state of your Kubernetes\ncluster using the GitLab Agent for Kubernetes and Helm\n\n\n![Final\nProduct](https://about.gitlab.com/images/blogimages/2022-11-11-simple-kubernetes-management-with-gitlab/final-product.png){:\n.shadow}\n\n\n\n### Prerequisites\n\n- AWS or GCP account with permissions to provision resources\n\n- GitLab account \n\n- Access to a GitLab Runner\n\n- 20 minutes\n\n\n### An overview of this tutorial is as follows:\n\n\n1. Set up the GitLab Terraform Kubernetes Template 🏗️\n\n2. Register the GitLab Agent 🕵️\n\n3. Add in Cloud Credentials ☁️🔑\n\n4. Set up the Kubernetes Cluster Management Template 🚧\n\n5. Enjoy your Kubernetes Cluster completely managed in code! 👏\n\n\n## Set up the GitLab Terraform Kubernetes Template\n\n\nStart by importing the example project by URL -\n[https://gitlab.com/projects/new#import_project](https://gitlab.com/projects/new#import_project)\n\n\nTo import the project:\n\n\n1. In GitLab, on the top bar, select **Main menu > Projects > View all\nprojects**.\n\n2. On the right of the page, select **New project**.\n\n3. Select **Import project**.\n\n4. Select **Repository by URL**.\n\n5. 
For the Git repository URL:\n\n- [GCP Google Kubernetes\nEngine](https://cloud.google.com/kubernetes-engine):\nhttps://gitlab.com/gitlab-org/configure/examples/gitlab-terraform-gke.git\n\n- [AWS Elastic Kubernetes Service](https://aws.amazon.com/eks/):\nhttps://gitlab.com/gitlab-org/configure/examples/gitlab-terraform-eks.git\n\n6. Complete the fields and select **Create project**.\n\n\n## Register the GitLab Agent\n\n\nWith your newly created **gitlab-terraform-k8s** repo, create a GitLab Agent\nfor Kubernetes:\n\n\n1. On the left sidebar, select **Infrastructure > Kubernetes clusters**.\nSelect **Connect a cluster (agent).**\n\n2. From the **Select an agent** dropdown list, select **eks-agent** or **gke-agent**,\nand select **Register an agent**.\n\n3. GitLab generates a registration token for the agent. **Securely store\nthis secret token, as you will need it later.**\n\n4. GitLab provides an address for the agent server (KAS). Securely store\nthis as you will also need it later.\n\n5. Add this to the\n**gitlab-terraform-eks/.gitlab/agents/eks-agent/config.yaml** in order to\nallow the GitLab Agent to have access to your entire group.\n\n\n```yaml\n\nci_access:\n  groups:\n    - id: your-namespace-here\n```\n\n\n![Register GitLab\nAgent](https://about.gitlab.com/images/blogimages/2022-11-11-simple-kubernetes-management-with-gitlab/register-gitlab-agent.png){:\n.shadow}\n\n\n\n## Add in your Cloud Credentials to CI/CD variables\n\n\n### [AWS EKS](https://aws.amazon.com/eks/)\n\n\nOn the left sidebar, select **Settings > CI/CD** and expand **Variables**.\n\n1. Set the variable **AWS_ACCESS_KEY_ID** to your AWS access key ID.\n\n2. Set the variable **AWS_SECRET_ACCESS_KEY** to your AWS secret access key.\n\n3. Set the variable **TF_VAR_agent_token** to the agent token displayed in\nthe previous task.\n\n4. 
Set the variable **TF_VAR_kas_address** to the agent server address\ndisplayed in the previous task.\n\n\n![Add in CI/CD\nvariables](https://about.gitlab.com/images/blogimages/2022-11-11-simple-kubernetes-management-with-gitlab/cicd-variables.png){:\n.shadow}\n\n\n\n### [GCP GKE](https://cloud.google.com/kubernetes-engine)\n\n\n1. To authenticate GCP with GitLab, create a GCP service account with the\nfollowing roles: **Compute Network Viewer, Kubernetes Engine Admin, Service\nAccount User, and Service Account Admin**. Both User and Admin service\naccounts are necessary. The User role impersonates the default service\naccount when creating the node pool. The Admin role creates a service\naccount in the kube-system namespace.\n\n2. **Download the JSON file** with the service account key you created in\nthe previous step.\n\n3. On your computer, encode the JSON file to base64 (replace\n/path/to/sa-key.json with the path to your key):\n\n\n```shell\nbase64 -i /path/to/sa-key.json | tr -d '\\n'\n```\n\n\n- Use the output of this command as the **BASE64_GOOGLE_CREDENTIALS**\nenvironment variable in the next step.\n\n\n4. On the left sidebar, select **Settings > CI/CD** and expand **Variables**.\n\n5. Set the variable **BASE64_GOOGLE_CREDENTIALS** to the base64 encoded JSON\nfile you just created.\n\n6. Set the variable **TF_VAR_gcp_project** to your GCP project name.\n\n7. Set the variable **TF_VAR_agent_token** to the agent token displayed in\nthe previous task.\n\n8. 
Set the variable **TF_VAR_kas_address** to the agent server address\ndisplayed in the previous task.\n\n\n## Run GitLab CI to deploy your Kubernetes cluster!\n\n\n![Deploy Kubernetes\ncluster](https://about.gitlab.com/images/blogimages/2022-11-11-simple-kubernetes-management-with-gitlab/pipeline-view.png){:\n.shadow}\n\n\nWhen successfully completed, view the cluster in the AWS/GCP console!\n\n\n![AWS\nEKS](https://about.gitlab.com/images/blogimages/2022-11-11-simple-kubernetes-management-with-gitlab/aws-eks.png){:\n.shadow}\n\n\n### You are halfway done! 👏 Keep it up!\n\n\n## Set up the Kubernetes Cluster Management Project\n\n\nCreate a project from the cluster management project template -\n[https://gitlab.com/projects/new#create_from_template](https://gitlab.com/projects/new#create_from_template)\n\n\n1. In GitLab, on the top bar, select **Main menu > Projects > View all\nprojects**.\n\n2. On the right of the page, select **New project**.\n\n3. Select **Create from template**.\n\n4. From the list of templates, next to **GitLab Cluster Management**, select\n**Use template**.\n\n5. Enter the project details. Ensure this project is created in the same\nnamespace as the gitlab-terraform-k8s project.\n\n6. Select **Create project**.\n\n7. Once the project is created, on the left sidebar select **Settings >\nCI/CD** and expand **Variables**.\n\n8. Set the variable **KUBE_CONTEXT** to point to the GitLab Agent. For example,\n`noah-ing-demos/infrastructure/gitlab-terraform-eks:eks-agent`.\n\n\n![Set Kube\nContext](https://about.gitlab.com/images/blogimages/2022-11-11-simple-kubernetes-management-with-gitlab/kube-config.png){:\n.shadow}\n\n\n\n- **Uncomment the applications you'd like to be installed** into your\nKubernetes cluster in the **helmfile.yaml**. In this instance I chose\ningress, cert-manager, prometheus, and Vault. 
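\n\nFor illustration only, the resulting entries in the template’s **helmfile.yaml** might look like the following sketch (the application paths here are assumptions based on the cluster management template’s layout, not copied from it):\n\n```yaml\n# Sketch of helmfile.yaml: each uncommented path below is deployed\n# by the cluster management CI/CD pipeline on the next run.\nhelmfiles:\n  - path: applications/ingress/helmfile.yaml\n  - path: applications/cert-manager/helmfile.yaml\n  - path: applications/prometheus/helmfile.yaml\n  - path: applications/vault/helmfile.yaml\n```\n\nCommitting a change like this to the default branch is what kicks off the deployment described next.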
\n\n\n![Uncomment Applications in\nhelmfile](https://about.gitlab.com/images/blogimages/2022-11-11-simple-kubernetes-management-with-gitlab/helmfile.png){:\n.shadow}\n\n\nCommitting that change will trigger your **CI/CD pipeline**, which should look like this.\n\n\n![Cluster Management\nCI/CD](https://about.gitlab.com/images/blogimages/2022-11-11-simple-kubernetes-management-with-gitlab/cluster-management-cicd.png){:\n.shadow}\n\n\nOnce completed, **go to the AWS/GCP console** and check out all the deployed\nresources!\n\n\n![Deployed EKS\napplications](https://about.gitlab.com/images/blogimages/2022-11-11-simple-kubernetes-management-with-gitlab/deployed-eks-applications.png){:\n.shadow}\n\n\n### Voila! 🎉\n\n\n## Enjoy your Kubernetes cluster completely defined in code! 👏👏👏\n\n\nNow with these two repositories you can **manage a Kubernetes cluster\nentirely through code**:\n\n\n- For managing the Kubernetes cluster's infrastructure and configuring its\nresources, you can make changes to the\n[gitlab-terraform-eks](https://gitlab.com/gitlab-org/configure/examples/gitlab-terraform-eks)\nrepository you have set up. 
This project has a **Terraform CI/CD pipeline**\nthat will allow you to **review, provision, configure, and manage your\nKubernetes** infrastructure with ease.\n\n\n- For managing the desired state of the Kubernetes cluster, the\n[cluster-management](https://gitlab.com/gitlab-org/project-templates/cluster-management)\nrepository has a **GitLab Agent** set up and will **deploy any Kubernetes\nobjects defined in the helm files**.\n\n\n➡️ Bonus: If you'd like to deploy your own application to the Kubernetes\ncluster, then add to your **cluster-management** `helmfile` and see the\nGitLab Agent for Kubernetes roll it out with ease!\n\n\n\n## References\n\n- [Create a New GKE\nCluster](https://docs.gitlab.com/ee/user/infrastructure/clusters/connect/new_gke_cluster.html)\n\n- [Create a New EKS\nCluster](https://docs.gitlab.com/ee/user/infrastructure/clusters/connect/new_eks_cluster.html)\n\n- [Cluster Management\nProject](https://docs.gitlab.com/ee/user/clusters/management_project.html)\n\n\n\n## Related posts\n\n- [The ultimate guide to GitOps with\nGitLab](https://about.gitlab.com/blog/the-ultimate-guide-to-gitops-with-gitlab/)\n\n- [GitOps with GitLab: Infrastructure provisioning with GitLab and\nTerraform](https://about.gitlab.com/blog/gitops-with-gitlab-infrastructure-provisioning/)\n\n- [GitOps with GitLab: Connect with a Kubernetes\ncluster](https://about.gitlab.com/blog/gitops-with-gitlab-connecting-the-cluster/)\n",[748,9,539,814,815,685],{"slug":2272,"featured":6,"template":688},"simple-kubernetes-management-with-gitlab","content:en-us:blog:simple-kubernetes-management-with-gitlab.yml","Simple Kubernetes Management With 
Gitlab","en-us/blog/simple-kubernetes-management-with-gitlab.yml","en-us/blog/simple-kubernetes-management-with-gitlab",{"_path":2278,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":2279,"content":2285,"config":2290,"_id":2292,"_type":13,"title":2293,"_source":15,"_file":2294,"_stem":2295,"_extension":18},"/en-us/blog/simplify-your-cloud-account-management-for-kubernetes-access",{"title":2280,"description":2281,"ogTitle":2280,"ogDescription":2281,"noIndex":6,"ogImage":2282,"ogUrl":2283,"ogSiteName":675,"ogType":676,"canonicalUrls":2283,"schema":2284},"Simplify your cloud account management for Kubernetes access","In this tutorial, learn how to use the GitLab agent for Kubernetes and its user impersonation features for secure cluster access.\n\n","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749670563/Blog/Hero%20Images/cloudcomputing.jpg","https://about.gitlab.com/blog/simplify-your-cloud-account-management-for-kubernetes-access","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"Simplify your cloud account management for Kubernetes access\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Viktor Nagy\"}],\n        \"datePublished\": \"2024-03-19\",\n      }",{"title":2280,"description":2281,"authors":2286,"heroImage":2282,"date":2287,"body":2288,"category":855,"tags":2289},[765],"2024-03-19","We hear you: Managing cloud accounts is risky, tedious, and time-consuming,\nbut also a must-have in many situations. You might run your Kubernetes\nclusters with one of the hyperclouds, and your engineers need to access at\nleast the non-production cluster to troubleshoot issues quickly and\nefficiently. 
Sometimes, you also need to give special, temporary access to\nengineers on a production cluster.\n\n\nYou have also told us that access requests might not come very often, but\nwhen they do, they are urgent, and given the high security requirements\naround the process, they can take close to a week to fulfill. \n\n\nBy giving access to your cloud infrastructure, you automatically expose\nyourself to risks. As a result, it's a best practice to restrict access only\nto the resources the given user must have access to. However, cloud identity\nand access management (IAM) is complex by nature. \n\n\nIf you are using Kubernetes and you need to give access specifically to your\nclusters only, GitLab can help. Your users will be able to authenticate to your\ncluster, so you can configure the Kubernetes role-based access controls\n(RBAC) to restrict their access within the cluster. With GitLab, and\nspecifically the GitLab agent for Kubernetes, you can start at the last step\nand focus only on the RBAC aspect.\n\n\n## What is the GitLab agent for Kubernetes?\n\n\nThe GitLab agent for Kubernetes is a set of GitLab components that allows a\npermanent, bi-directional streaming channel between your GitLab instance and\nyour Kubernetes cluster (one agent per cluster). Once the agent connection\nis configured, you can share it across projects and groups within your\nGitLab instance, allowing a single agent to serve all the access needs of a\ncluster.\n\n\nCurrently, the agent has several features to simplify your Kubernetes\nmanagement tasks:\n\n\n* [Integrates with GitLab\nCI/CD](https://docs.gitlab.com/ee/user/clusters/agent/ci_cd_workflow.html)\nfor push-based deployments or regular cluster management jobs. The\nintegration exposes a Kubernetes context per available agent in the Runner\nenvironment, and any tool that can take a context as an input (e.g. 
kubectl\nor helm CLI) can reach your cluster from the CI/CD jobs.\n\n* Integrates with the GitLab GUI, specifically the environment pages. Users\ncan configure [an environment to show the Kubernetes\nresources](https://docs.gitlab.com/ee/ci/environments/kubernetes_dashboard.html)\navailable in a specific namespace, and even set up a Flux resource to track\nthe reconciliation of your applications.\n\n* Enables users to use the GitLab-managed channel to [connect to the cluster\nfrom their local\nlaptop](https://docs.gitlab.com/ee/user/clusters/agent/user_access.html#access-a-cluster-with-the-kubernetes-api),\nwithout giving them cloud-specific Kubernetes access tokens.\n\n* Supports [Flux GitRepository\nreconciliations](https://docs.gitlab.com/ee/user/clusters/agent/gitops.html#immediate-git-repository-reconciliation)\nby triggering a reconciliation automatically on new commits in repositories\nthe agent can access.\n\n* [Runs operational container\nscans](https://docs.gitlab.com/ee/user/clusters/agent/vulnerabilities.html)\nand shows the reports in the GitLab UI.\n\n* Enables you to enrich the [remote\ndevelopment](https://docs.gitlab.com/ee/user/project/remote_development/)\noffering with [workspaces](https://docs.gitlab.com/ee/user/workspace/).\n\n\n> Try simplifying your cloud account management for Kubernetes access today\nwith [a free trial of GitLab Ultimate](https://gitlab.com/-/trials/new).\n\n\n## The agent and access management\n\n\nThe GitLab agent for Kubernetes, which is available for GitLab Ultimate and\nPremium, impersonates various GitLab-specific users when it acts on behalf\nof GitLab in the cluster.\n\n\n* For the GitLab CI/CD integration, the agent impersonates the CI job as the\nuser, and enriches the user with group specific metadata that describe the\nproject and the group.\n\n\n* For the environment and local connections, the agent impersonates the\nGitLab user using the connection, and similarly to the CI/CD integration,\nthe impersonated 
Kubernetes user is enriched with group-specific metadata,\nlike roles in configured groups.\n\n\nAs this article is about using the agent instead of cloud accounts for\ncluster access, let’s focus on the environment and local connections setup.\n\n\n## An example setup\n\n\nTo offer a realistic setup, let’s assume that in our GitLab instance we have\nthe following groups and projects:\n\n\n* `/app-dev-group/team-a/service-1`\n\n* `/app-dev-group/team-a/service-2`\n\n* `/app-dev-group/team-b/service-3`\n\n* `/platform-group/clusters-project`\n\n\nIn the above setup, the agents are registered against the `clusters-project`\nproject and, in addition to other code, the project contains the agent\nconfiguration files:\n\n\n* `.gitlab/agents/dev-cluster/config.yaml`\n\n* `.gitlab/agents/prod-cluster/config.yaml`\n\n\nThe `dev-cluster` and `prod-cluster` directory names are actually the agent\nnames as well, and registered agents and related events can be seen within\nthe project’s “Operations/Kubernetes clusters” menu item. The agent offers\nsome minimal features by default, without a configuration file. 
To benefit\nfrom the user access features and to share the agent connection across\nprojects and groups, a configuration file is required.\n\n\nLet’s assume that we want to configure the agents in the following way:\n\n\n* For the development cluster connection:\n\n    * Everyone with at least the developer role in team-a should have read-write access to the team-specific namespace `team-a` only.\n    * Everyone with the group owner role in team-a should have namespace admin rights on the `team-a` namespace only.\n    * Members of `team-b` should not be able to access the cluster.\n\n* For the production cluster connection:\n\n    * Everyone with at least the developer role in team-a should have read-only access to the team-specific namespace `team-a`.\n    * Members of `team-b` should not be able to access the cluster.\n\nFor the development cluster, the above setup requires an agent configuration\nfile in `.gitlab/agents/dev-cluster/config.yaml` as follows:\n\n\n```yaml\n\nuser_access:\n  access_as:\n    user: {}\n  groups:\n    - id: app-dev-group/team-a # group_id=1\n    - id: app-dev-group/team-b # group_id=2\n```\n\n\nIn this code snippet we added the group ID of the specific groups in a\ncomment. 
We will need these IDs in the following Kubernetes RBAC\ndefinitions:\n\n\n```yaml\n\napiVersion: rbac.authorization.k8s.io/v1\n\nkind: RoleBinding\n\nmetadata:\n  name: team-a-dev-can-edit\n  namespace: team-a\nroleRef:\n  name: edit\n  kind: ClusterRole\n  apiGroup: rbac.authorization.k8s.io\nsubjects:\n  - name: gitlab:group_role:1:developer\n    kind: Group\n```\n\n\nand...\n\n\n```yaml\n\napiVersion: rbac.authorization.k8s.io/v1\n\nkind: RoleBinding\n\nmetadata:\n  name: team-a-owner-can-admin\n  namespace: team-a\nroleRef:\n  name: admin\n  kind: ClusterRole\n  apiGroup: rbac.authorization.k8s.io\nsubjects:\n  - name: gitlab:group_role:1:owner\n    kind: Group\n```    \n\n\nThe above two code snippets can be applied to the cluster with the GitLab\nFlux integration or manually via `kubectl`. They describe role bindings for\nthe `team-a` group members. It’s important to note that only the groups and\nprojects from the agent configuration file can be targeted as RBAC groups.\nTherefore, the following RBAC will not work as the impersonated user\nresources don’t know about the referenced projects:\n\n\n```yaml\n\napiVersion: rbac.authorization.k8s.io/v1\n\nkind: RoleBinding\n\nmetadata:\n  name: team-a-dev-can-edit\n  namespace: team-a\nroleRef:\n  name: edit\n  kind: ClusterRole\n  apiGroup: rbac.authorization.k8s.io\nsubjects:\n  - name: gitlab:project_role:3:developer # app-dev-group/team-a/service-1 project ID is 3\n    kind: Group\n```\n\n\nFor the production cluster we need the same agent configuration under\n`.gitlab/agents/prod-cluster/config.yaml` and the following RBAC\ndefinitions:\n\n\n```yaml\n\napiVersion: rbac.authorization.k8s.io/v1\n\nkind: RoleBinding\n\nmetadata:\n  name: team-a-dev-can-read\n  namespace: team-a\nroleRef:\n  name: view\n  kind: ClusterRole\n  apiGroup: rbac.authorization.k8s.io\nsubjects:\n  - name: gitlab:group_role:1:developer\n    kind: Group\n```\n\n\nThese configurations allow project owners to set up the environment pages 
so\nmembers of `team-a` will be able to see the status of their cluster\nworkloads in real-time and they should be able to access the cluster from\ntheir local computers using their favorite Kubernetes tools.\n\n\n## Explaining the magic\n\n\nIn the previous section, you learned how to set up role bindings for group\nmembers with specific roles. In this section, let's dive into the\nimpersonated user and their attributes.\n\n\nWhile Kubernetes does not have a User or Group resource, its authentication\nand authorization scheme behaves as if they existed. Users have a username, can\nbelong to groups, and can have other extra attributes.\n\n\nThe impersonated GitLab user carries the username `gitlab:username:\u003Cusername>` in the\ncluster. For example, if our imaginary user Béla has the GitLab username\n`bela`, then in the cluster the impersonated user will be called\n`gitlab:username:bela`. This allows targeting of a specific user in the\ncluster.\n\n\nEvery impersonated user belongs to the `gitlab:user` group. Moreover, for\nevery project and group listed in the agent configuration, we check the\ncurrent user’s role and add it as a group. This is more easily understood\nthrough an example, so let’s modify the agent configuration we\nused above a little bit.\n\n\n```yaml\n\nuser_access:\n  access_as:\n    user: {}\n  projects:\n    - id: platform-group/clusters-project # project_id=1\n  groups:\n    - id: app-dev-group/team-a # group_id=1\n    - id: app-dev-group/team-b # group_id=2\n```\n\n\nFor the sake of example, let’s assume the contrived setup that our user Béla\nis a maintainer in the `platform-group/clusters-project` project, is a\ndeveloper in the `app-dev-group/team-a` group, and an owner of the\n`app-dev-group/team-a/service-1` project. 
In this case, the impersonated\nKubernetes user `gitlab:username:bela` will belong to the following groups:\n\n\n* `gitlab:user`\n\n* `gitlab:project_role:1:developer`\n\n* `gitlab:project_role:1:maintainer`\n\n* `gitlab:group_role:1:developer`\n\n\nWhat happens is that we check Béla’s role in every project and group listed\nin the agent configuration, and set up all the roles that Béla has there. As\nBéla is a maintainer in `platform-group/clusters-project` (project ID 1), we\nadd him to both the `gitlab:project_role:1:developer` and\n`gitlab:project_role:1:maintainer` groups. Note as well, that we did not add\nany groups for the `app-dev-group/team-a/service-1` project, only its parent\ngroup that appears in the agent configuration.\n\n\n## Simplifying cluster management\n\n\nSetting up the agent and configuring the cluster as presented above is\neverything you need to model the presented access requirements in the\ncluster. You don’t have to manage cloud accounts or add in-cluster account\nmanagement tools like Dex. The agent for Kubernetes and its user\nimpersonation features can simplify your infrastructure management work.\n\n\nWhen new people join your company, once they become members of the `team-a`\nthey immediately get access to the clusters as configured above. Similarly,\nas someone leaves your company, you just have to remove them from the group\nand their access will be disabled. As we mentioned, the agent supports local\naccess to the clusters, too. As that local access runs through the\nGitLab-side agent component, it will be disabled as well when users are\nremoved from the `team-a` group.\n\n\nSetting up the agent takes around two-to-five minutes per cluster. Setting\nup the required RBAC might take another five minutes. 
In 10 minutes, users\ncan get controlled access to a cluster, saving days of work and decreasing\nthe risks associated with cloud accounts.\n\n\n## Get started today\n\n\nIf you want to try this approach and give your colleagues access to some\nof your clusters without managing cloud accounts, the following\ndocumentation pages should help you get started:\n\n\n- On self-managed GitLab instances, you might need to [configure the\nGitLab-side component (called\nKAS)](https://docs.gitlab.com/ee/administration/clusters/kas.html) of the\nagent for Kubernetes first.\n\n\n- You can learn more about [all the Kubernetes management features\nhere](https://docs.gitlab.com/ee/user/clusters/agent/), or you can\nimmediately dive in by [installing an\nagent](https://docs.gitlab.com/ee/user/clusters/agent/install/), and\n[granting users access to\nKubernetes](https://docs.gitlab.com/ee/user/clusters/agent/user_access.html).\n\n\n- You’ll likely want to [configure a Kubernetes\ndashboard](https://docs.gitlab.com/ee/ci/environments/kubernetes_dashboard.html)\nfor your deployed application.\n\n\n> Try simplifying your cloud account management for Kubernetes access today\nwith [a free trial of GitLab Ultimate](https://gitlab.com/-/trials/new).\n",[727,855,9,748],{"slug":2291,"featured":90,"template":688},"simplify-your-cloud-account-management-for-kubernetes-access","content:en-us:blog:simplify-your-cloud-account-management-for-kubernetes-access.yml","Simplify Your Cloud Account Management For Kubernetes 
Access","en-us/blog/simplify-your-cloud-account-management-for-kubernetes-access.yml","en-us/blog/simplify-your-cloud-account-management-for-kubernetes-access",{"_path":2297,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":2298,"content":2304,"config":2311,"_id":2313,"_type":13,"title":2314,"_source":15,"_file":2315,"_stem":2316,"_extension":18},"/en-us/blog/stackpoint-gitlab-integration",{"title":2299,"description":2300,"ogTitle":2299,"ogDescription":2300,"noIndex":6,"ogImage":2301,"ogUrl":2302,"ogSiteName":675,"ogType":676,"canonicalUrls":2302,"schema":2303},"GitLab K8s clusters: Backup and trusted charts in 10 min","StackPointCloud partners with GitLab to create a simple, turn-key experience for developers who want to move faster into production with their apps.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749671181/Blog/Hero%20Images/stackpoint-gitlab-integration.png","https://about.gitlab.com/blog/stackpoint-gitlab-integration","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"Turn-Key GitLab Enterprise Kubernetes clusters, backup, trusted charts — all in less than 10 minutes\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Matt Baldwin\"}],\n        \"datePublished\": \"2017-07-10\",\n      }",{"title":2305,"description":2300,"authors":2306,"heroImage":2301,"date":2308,"body":2309,"category":300,"tags":2310},"Turn-Key GitLab Enterprise Kubernetes clusters, backup, trusted charts — all in less than 10 minutes",[2307],"Matt Baldwin","2017-07-10","\n\n[Stackpoint.io](https://stackpointcloud.com/) is excited to announce we’ve worked with GitLab to enable an end-to-end turn-key solution that will help developers move even faster from idea to production.\n\n\u003C!-- more -->\n\nStackpoint.io advances the mandate of allowing developers to continue to focus on building product, leaving configuring the tooling to GitLab and Stackpoint.io. 
With this release, together, users can manage and collaborate on their clusters and ensure GitLab EE is operating correctly — all in a turn-key, developer-friendly way.\n\nOur Kubernetes cloud management platform now allows you to:\n\n* Build a GitLab EE Kubernetes cluster on the cloud of your choice (Google Compute, AWS, or Azure) in three easy steps.\n* Deploy GitLab EE to an existing Kubernetes cluster.\n* Upgrade your GitLab EE Kubernetes cluster in one click.\n* Set up a Kubernetes backup schedule: store in Google or Amazon, recover anywhere.\n* Get all your operational components, pre-configured, at build or run time: Sysdig for monitoring, Twistlock for security, Elasticsearch with Fluentd and Kibana for logging, and more.\n* Allow your developers quick and easy access to operational tools, trimmed down. For example, they can dive into their cluster’s Prometheus metrics – one click.\n\n![StackPoint integration with GitLab](https://about.gitlab.com/images/blogimages/stackpoint-integration.png)\n\nOur GitLab integration not only allows you to run a self-healing deployment of GitLab EE on Kubernetes, but we’ve also integrated Docker Registry automatically. If you’re running on AWS, we set up an ELB for you and secure it all with Let’s Encrypt.\n\n## Get started\n\n1. Get a new GitLab EE Kubernetes cluster up, running, and configured for production within 10 minutes.\n\n2. Deploy your first app to Kubernetes using GitLab.\n\n3. 
Schedule your protection of your cluster.\n\nGive it a shot [now](https://stackpoint.io/#/clusters/new?provider=aws&solution=gitlab_ee).\n",[9,1127,232],{"slug":2312,"featured":6,"template":688},"stackpoint-gitlab-integration","content:en-us:blog:stackpoint-gitlab-integration.yml","Stackpoint Gitlab Integration","en-us/blog/stackpoint-gitlab-integration.yml","en-us/blog/stackpoint-gitlab-integration",{"_path":2318,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":2319,"content":2324,"config":2329,"_id":2331,"_type":13,"title":2332,"_source":15,"_file":2333,"_stem":2334,"_extension":18},"/en-us/blog/stackpoint-webcast-recording-highlights",{"title":2320,"description":2321,"ogTitle":2320,"ogDescription":2321,"noIndex":6,"ogImage":2301,"ogUrl":2322,"ogSiteName":675,"ogType":676,"canonicalUrls":2322,"schema":2323},"Demo: Turn-key Kubernetes with StackPoint.io","StackPointCloud CEO Matt Baldwin shows how GitLab users can now go even faster from idea to production with an integration that takes the pain out of building Kubernetes clusters.","https://about.gitlab.com/blog/stackpoint-webcast-recording-highlights","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"Demo: Turn-key Kubernetes with StackPoint.io\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Rebecca Dodd\"}],\n        \"datePublished\": \"2017-08-03\",\n      }",{"title":2320,"description":2321,"authors":2325,"heroImage":2301,"date":2326,"body":2327,"category":300,"tags":2328},[940],"2017-08-03","\n\nStackPointCloud [partnered with us](/blog/stackpoint-gitlab-integration/) to bring you an end-to-end, turn-key Kubernetes solution, speeding up the process from idea to production. 
Watch the turn-key piece in action in our recent webcast.\n\n\u003C!-- more -->\n\nKubernetes allows you to manage an application across different resources and clouds, enabling self-healing (so if a container dies, it will be rescheduled on another host) and scaling up on demand, or scaling down as needed to save costs. With a host of benefits, it's no surprise that there's a strong and active community around Kubernetes, but for some teams, the time and effort required to install and configure a Kubernetes cluster could be better spent elsewhere.\n\nWhile GitLab covers every step of the software development lifecycle, we do require you to have a Kubernetes cluster up and running before you begin to use it, which is where some users get stuck. Watch the video below to see how our friends at [StackPoint.io](https://stackpointcloud.com/) have worked with us on a solution that does the hard work for you, to \"close the last mile\" in under 10 minutes. The demo starts at 12:02.\n\n\u003Ciframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/wu2AIcwjeQ8\" frameborder=\"0\" allowfullscreen>\u003C/iframe>\n\nWant to give it a try for yourself? 
[Launch a Kubernetes cluster with GitLab in one click](https://goo.gl/qnSp3N).\n",[9,232],{"slug":2330,"featured":6,"template":688},"stackpoint-webcast-recording-highlights","content:en-us:blog:stackpoint-webcast-recording-highlights.yml","Stackpoint Webcast Recording Highlights","en-us/blog/stackpoint-webcast-recording-highlights.yml","en-us/blog/stackpoint-webcast-recording-highlights",{"_path":2336,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":2337,"content":2343,"config":2348,"_id":2350,"_type":13,"title":2351,"_source":15,"_file":2352,"_stem":2353,"_extension":18},"/en-us/blog/the-kubecon-summary-from-a-product-perspective",{"title":2338,"description":2339,"ogTitle":2338,"ogDescription":2339,"noIndex":6,"ogImage":2340,"ogUrl":2341,"ogSiteName":675,"ogType":676,"canonicalUrls":2341,"schema":2342},"How what we learned at KubeCon EU 2022 will impact our product roadmaps","Platform integrations and secrets management are among our product team's primary takeaways. Find out why.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1750097776/Blog/Hero%20Images/Blog/Hero%20Images/2_2.png_1750097776369.png","https://about.gitlab.com/blog/the-kubecon-summary-from-a-product-perspective","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"How what we learned at KubeCon EU 2022 will impact our product roadmaps\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Viktor Nagy\"}],\n        \"datePublished\": \"2022-05-31\",\n      }",{"title":2338,"description":2339,"authors":2344,"heroImage":2340,"date":2345,"body":2346,"category":1747,"tags":2347},[765],"2022-05-31","\nAfter two years of only virtual KubeCon events, the GitLab product team was excited to participate in and meet colleagues, partners, and more from our industry at KubeCon EU 2022, held in Valencia, Spain. We were present with four product leaders, a software developer, and a UX researcher. 
This post summarizes our primary takeaways from the conference, an experience that will shape our roadmaps.\n\nWe will discuss the following topics:\n\n- Internal platforms and GitOps\n- Secrets management\n- Infrastructure integrations\n- WebAssembly, a.k.a. WASM\n\nThere were 32 topic types and several day-0 events at KubeCon. Many talks focused on just a few tools, and many Cloud Native Computing Foundation ([CNCF](https://www.cncf.io/)) projects held their community meetings during these days. Some talks were given in person, and others were broadcast virtually with live Q&A. Topics and approaches varied widely, and there were many talks about the various aspects of cluster management, too. However, we left this topic out on purpose: at GitLab we want to focus on software developers and provide one DevOps platform to support their work, and cluster management is one step away from this focus. Still, we noticed some remarkable patterns, highlighted by the four elements of our list.\n\n> You’re invited! Join us on June 23rd for the [GitLab 15 launch event](https://page.gitlab.com/fifteen) with DevOps guru Gene Kim and several GitLab leaders. They’ll show you what they see for the future of DevOps and The One DevOps Platform.\n\n## Internal platforms and GitOps\n\nCompanies want their developers to focus on their core business. They create internal platforms to hide the complexity of Day 0-2 operations from their software engineers while still enabling the \"shift left\" movement of DevOps. These platforms often involve wiring together several tools.\n\nMany talks presented how a given team or company approached their platform problem and what tools they used, and one could often feel the 18-month sweat of a whole platform team trying to come up with a solution.\n\nThese platforms use either a push- or pull-based model for deployments. No single approach is emerging due to legacy applications and different requirements. 
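As a sketch of what the pull-based flavor can look like in GitLab terms, the agent for Kubernetes can be configured to watch a manifest repository and sync it into the cluster. The project path and glob below are placeholders, and the exact schema depends on your agent version:

```yaml
# .gitlab/agents/my-agent/config.yaml (illustrative)
# The agent watches the listed project and applies matching manifests,
# so deployments are pulled by the cluster rather than pushed by CI.
gitops:
  manifest_projects:
    - id: my-group/my-manifests        # placeholder project path
      paths:
        - glob: 'manifests/**/*.yaml'  # which files to sync
```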
While there is a definition of GitOps provided by the [OpenGitOps](https://opengitops.dev/) initiative, several presenters offered their own definitions, including their own takes on pull-based deployments.\n\nIn our conversations at KubeCon, we also learned that users would like help with the [Pipeline Authoring](/direction/verify/pipeline_composition/) workflow.\n\nBesides the wiring of the tools, the industry is still looking for a unified approach to multi-tenancy (there might not be one), and integrating security processes sometimes seems overly challenging.\n\n### How does this affect our roadmap?\n\nThere is a lot of potential in building a platform that serves as the starting point for internal platforms. Imagine a \"tool\" that shortens the time required to create an internal platform to days or weeks instead of a whole year. This is the GitLab vision of The One DevOps Platform.\n\nAs a result, we don't plan any changes in our direction. We will continue investing in the recently started [Deployment direction](/direction/delivery/) to provide all the building blocks for a platform in a single tool, and we are already actively looking for integrated experiences across our offering.\n\nWe’re working on a CI/CD Component Catalog that includes CI templates. This will [support the Pipeline Authoring workflow](https://gitlab.com/groups/gitlab-org/-/epics/7462).\n\n## Secrets management\n\nOne of the things that often came up in our discussions is secrets management. We fielded a large-scale survey related to secrets at KubeCon, and attendees were glad that we’re thinking about this topic. Security is part of the DevOps discussion, and secrets management is a serious issue, especially in a cloud-native context.\n\n- Jenkins, GitHub, and GitLab were all mentioned during the secrets management discussions.\n- Users would like to offload the secrets management responsibility to another product. 
In many cases, their security requirements are strict, so they don't want to, or can't, handle secrets themselves.\n- HashiCorp Vault is a preferred tool for managing and handling secrets (primarily in large enterprises working in finance or government). At the same time, most companies would like to avoid operating one more application in their stack.\n- OpenID Connect ([OIDC](https://docs.gitlab.com/ee/integration/openid_connect_provider.html)) with JSON web tokens (JWT) is an essential direction for us.\n\n### How does this affect our roadmap?\n\nWe should invest more in secrets management, since this is a pain our customers would like us to solve, and the lack of it is becoming a nonstarter for many organizations.\n\nWe want to advance along three main vectors:\n\n- Improve our existing secrets management solution - although we don't have a clear solution yet, we should improve our current variables capabilities with additional features that help users leverage variables for secrets, making them a \"good enough\" feature to use. We are actively working in this direction by removing some of the limitations we have around [variables and masking](https://gitlab.com/groups/gitlab-org/-/epics/1994).\n- Improve our existing [HashiCorp Vault integration](https://docs.gitlab.com/ee/ci/examples/authenticating-with-hashicorp-vault/) using the JWT, allowing us to integrate with additional vendors (AWS, Azure, GCP). As with the previous point, we are moving in this direction by supporting OIDC and [adding audience claims to our JWT](https://gitlab.com/groups/gitlab-org/-/epics/7335).\n- Develop [a clear strategy for a built-in secrets management solution](/direction/govern/pipeline_security/secrets_management/#next-9-12-monhts). To provide our users and customers with choice, GitLab wants to use HashiCorp Vault for secrets management handling. 
We believe that our approach should be not to build the logic ourselves but to leverage an open source, [cloud native](/topics/cloud-native/) project that we could build into GitLab.\n\n## Infrastructure integrations\n\nInfrastructure integrations came in several flavors during the talks. Some were about cluster management, which is not the focus of this post. Several presentations showed that internal platforms need a strong infrastructure aspect, too. When a new project or microservice is started, it might require a new namespace in the cluster with associated RBAC and policies, optionally storage, a source code management repo with automation, and the appropriate permissions. Deployments might create ephemeral environments or could modify the underlying environment within predefined constraints.\n\nThe top tools mentioned in this area are:\n\n- Terraform\n- Crossplane\n- Pulumi\n\n### How does this affect our roadmap?\n\nGitLab already has [great integrations for Terraform](https://docs.gitlab.com/ee/user/infrastructure/iac/), and the other tools are on our radar, too.\n\nWe are open to further integrations but cannot currently prioritize them on our own. We hope that the community will be interested in contributing to benefit everyone.\n\n## WebAssembly, a.k.a. WASM\n\nBuilding Docker containers might not be necessary to get easy-to-manage container binaries. WASM runtimes are becoming available for Kubernetes, and many programming languages can compile natively to WASM. WASM can provide a secure runtime environment without Docker and might be able to simplify the toolchain developers need to learn.\n\nWe don't plan to add direct WASM support to GitLab yet. The generic package registry can hold WASM modules, while their deployment is up to the user.\n\nAt the same time, we see a lot of potential in simple runtime environments built around WASM. While GitLab is not in the business of offering runtime services, we will be actively monitoring the market. 
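As noted above, a compiled WASM module can already be stored in the generic package registry today. A sketch of a CI job doing this, where the package name, version, and file path are illustrative:

```yaml
# Illustrative .gitlab-ci.yml job: upload a built module to the
# generic package registry via the documented PUT upload endpoint.
publish-wasm:
  stage: deploy
  script:
    - >
      curl --header "JOB-TOKEN: $CI_JOB_TOKEN"
      --upload-file build/app.wasm
      "$CI_API_V4_URL/projects/$CI_PROJECT_ID/packages/generic/wasm-app/1.0.0/app.wasm"
```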
We might look into more WASM integrations as we see more demand, and as tools and services mature in this space.\n\n## GitLab feedback\n\nIt's great to work on a product where the overall sentiment is positive, both from customers that rely on it intensely and from attendees that have to use other tools but would love to use GitLab or have just started to play with it recently.\n\nWe received the following notable mentions as feedback:\n\n- Stability and reliability improved over the last several months.\n- Users love our documentation (primarily around CI) - they mentioned it's easy to use and get started with.\n- Given the size of GitLab and the number of our users, we received feedback about long-outstanding issues. We were happy to respond that we are addressing at least some of them shortly.\n- Several customers asked whether we have resources for migrating from Jenkins to GitLab.\n- A few customers mentioned that they had to move away from GitLab mainly because of an upper-level decision, despite favoring GitLab.\n\n## Conclusions\n\n![The GitLab team](https://about.gitlab.com/images/blogimages/kubecon-gitlab-team.jpg)\n\nWe enjoyed all the talks and were delighted to meet and speak with our users and customers. Thanks to all of you, we could \"feel the pulse\" of how we are doing and validate our direction.\n\nWe hope that this post will guide those who could not [attend KubeCon](https://about.gitlab.com/events/kubecon/) and serve as a summary for those who did attend. All the recordings will likely be available on YouTube from June 6, 2022.\n\nLet us know in the comments if you think we missed some important direction.\n\n_This blog post and linked pages contain information related to upcoming products, features, and functionality. It is important to note that the information presented is for informational purposes only. Please do not rely on this information for purchasing or planning purposes. 
As with all projects, the items mentioned in this blog and linked pages are subject to change or delay. The development, release, and timing of any products, features, or functionality remain at the sole discretion of GitLab Inc._\n",[9,814,815,539,727,685],{"slug":2349,"featured":6,"template":688},"the-kubecon-summary-from-a-product-perspective","content:en-us:blog:the-kubecon-summary-from-a-product-perspective.yml","The Kubecon Summary From A Product Perspective","en-us/blog/the-kubecon-summary-from-a-product-perspective.yml","en-us/blog/the-kubecon-summary-from-a-product-perspective",{"_path":2355,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":2356,"content":2362,"config":2367,"_id":2369,"_type":13,"title":2370,"_source":15,"_file":2371,"_stem":2372,"_extension":18},"/en-us/blog/top-five-cloud-trends",{"title":2357,"description":2358,"ogTitle":2357,"ogDescription":2358,"noIndex":6,"ogImage":2359,"ogUrl":2360,"ogSiteName":675,"ogType":676,"canonicalUrls":2360,"schema":2361},"Top 5 cloud trends of 2018: What has happened and what’s next","Cloud computing is officially where it's at. Find out who's in the lead and how to plan for the future.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749678732/Blog/Hero%20Images/clouds.jpg","https://about.gitlab.com/blog/top-five-cloud-trends","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"Top 5 cloud trends of 2018: What has happened and what’s next\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Aricka Flowers\"}],\n        \"datePublished\": \"2018-08-02\",\n      }",{"title":2357,"description":2358,"authors":2363,"heroImage":2359,"date":2364,"body":2365,"category":790,"tags":2366},[2129],"2018-08-02","\nThe cloud has undoubtedly infiltrated the enterprise space – and it is here to stay. 
Gartner Research predicts that by 2025, 80 percent of companies will have [opted to shutter](https://www.zdnet.com/article/the-data-center-is-dead-heres-what-comes-next/) their traditional data centers. Cloud spend is on the rise, so much so that the International Data Corporation (IDC) recently upped its 2018 prediction for cloud IT infrastructure spending to $57.2 billion, reflecting a 21.3 percent increase over the previous year. With the apparent exponential growth of cloud computing, we decided to root out the top five cloud trends of 2018 and take a look at what might be next:\n\n#### Public cloud use is on the rise.\nMulti-cloud solutions are the primary strategy for large companies, with public cloud use gaining steam. Thirty-eight percent of enterprises represented in the seventh annual [RightScale State of the Cloud\nsurvey](https://www.rightscale.com/learn/cloud-strategy/cloud-computing-trends) have made the public cloud a priority for 2018, up from 29 percent the previous year.\n\nThe industries expected to spend the most on public cloud services in 2018 are discrete manufacturing, professional services, and banking, according to IDC. The telecommunications, banking, and professional services industries are expected to see the most growth in cloud spending over the next five years, with IDC expecting each sector to see increases of almost 25 percent by 2021.\n\n#### Kubernetes is now king.\nThe container orchestration battle is over and Kubernetes has emerged as the undisputed winner. Industry insiders, like Forrester, have [predicted](https://blogs-images.forbes.com/louiscolumbus/files/2017/11/Forr-Cloud-Predictions-2018-2.png) Kubernetes as the winner and now the data proves this out. According to the State of the Cloud survey, Kubernetes shows 27 percent current use while Docker Swarm shows only 12 percent adoption. 
Use of Mesosphere clocks in at only 6 percent, but the report doesn't distinguish between Marathon or Kubernetes and [Mesosphere supports both](https://mesosphere.com/blog/kubernetes-dcos/). The data could further be skewed by showing container orchestration offerings from AWS, Azure, and Google Cloud as separate segments when in fact they all run Kubernetes. While there's some muddiness in how people are consuming Kubernetes, what's clear is that the market has spoken and Kubernetes is the de facto way to do container orchestration.\n\n#### Azure is hacking away at AWS’s lead in cloud infrastructure services.\nAmazon Web Services has the lion’s share of the infrastructure-as-a-service (IaaS) market, but Microsoft’s Azure is closing the gap with growth that is outpacing its top competitor.\n\nAzure adoption grew by 89 percent in the second quarter, ending Q2 with an 18 percent share of the market, according to a [report by Canalys](https://www.canalys.com/newsroom/cloud-infrastructure-spend-reaches-us%2420-billion-in-q2-2018-with-hybrid-it-approach-dominant), an independent analyst firm. While still in the lead with a 31 percent share of the market, AWS’s second quarter growth was substantially less at 48 percent. Google Cloud rounded out the top three performers of Q2, growing a massive 108 percent during the quarter. Google ended the quarter with an eight percent share of the cloud infrastructure services market. Azure, AWS, and Google Cloud account for 57 percent of the IaaS market, Canalys reports.\n\n#### Enterprise cloud spending is on the rise.\nCompanies are making heavy investments in the cloud, as seen by IDC’s decision to increase their 2018 spending prediction at the half-year mark. 
The market intelligence agency now expects to see a more than 21 percent increase in cloud infrastructure spending this year, which aligns with reports from enterprise survey respondents.\n\nTwenty percent of enterprises say they plan to more than double their public cloud spend in 2018, according to the State of the Cloud survey and 71 percent of the poll’s 997 respondents expect to increase their public cloud spend by more than 20 percent this year.\n\n#### Security remains a top cloud challenge.\nSecurity regularly ranks as the number one concern among cloud adopters. Seventy-seven percent of State of the Cloud respondents reported security as a challenge, with 29 percent finding it to be a significant hurdle, particularly for beginners. Sixty-six percent of those surveyed in LogicMonitor’s [Cloud Vision 2020: The Future of the Cloud Study](https://www.logicmonitor.com/resource/the-future-of-the-cloud-a-cloud-influencers-survey/) reported security as the biggest challenge for organizations operating in the public cloud.\n\nWith security being a top priority for enterprises working in the cloud, Forrester anticipates that security will become [“integrated with — and integral to — cloud platforms”](/security/) in 2018.\n\nCover photo by [Andrew Ruiz](https://unsplash.com/photos/P45gtJKufJo) on [Unsplash](https://unsplash.com/)\n{: .note}\n",[9,1004,855],{"slug":2368,"featured":6,"template":688},"top-five-cloud-trends","content:en-us:blog:top-five-cloud-trends.yml","Top Five Cloud 
Trends","en-us/blog/top-five-cloud-trends.yml","en-us/blog/top-five-cloud-trends",{"_path":2374,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":2375,"content":2380,"config":2388,"_id":2390,"_type":13,"title":2391,"_source":15,"_file":2392,"_stem":2393,"_extension":18},"/en-us/blog/top-ten-reasons-to-check-out-gitlab-virtual-commit",{"title":2376,"description":2377,"ogTitle":2376,"ogDescription":2377,"noIndex":6,"ogImage":1739,"ogUrl":2378,"ogSiteName":675,"ogType":676,"canonicalUrls":2378,"schema":2379},"Top Ten Reasons to Check Out GitLab's Virtual Commit","An overview of GitLab's Virtual Commit and the content available specific to public sector.","https://about.gitlab.com/blog/top-ten-reasons-to-check-out-gitlab-virtual-commit","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"Top Ten Reasons to Check Out GitLab's Virtual Commit\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Jim Riley\"}],\n        \"datePublished\": \"2020-09-14\",\n      }",{"title":2376,"description":2377,"authors":2381,"heroImage":1739,"date":2383,"body":2384,"category":2385,"tags":2386},[2382],"Jim Riley","2020-09-14","\n \n{::options parse_block_html=\"true\" /}\n\n \nThis year the GitLab crew stepped away from everything they knew about creating an amazing, winning conference and reworked the Commit vision to better fit in line with the needs of our changed world. The result was an incredible digital experience. Commit transformed into a 24-hour full conference program filled with practical DevOps strategies shared by leaders in development, operations, and security. Why 24-hours? 
GitLab has customers, partners, and contacts all across the globe, and the Commit team saw the virtual environment as an opportunity to make certain everyone had access to all the exciting, featured content and our GitLab team in real time.\n \nGitLab customers and partners shared real-world examples of how GitLab is helping their organizations innovate, survive, and succeed @ speed. [Log in](https://gitlabcommitvirtual.com/) to view the top ten presentations that showcase how Public Sector is leading digital transformation through GitLab.\n \n - Nicolas Chaillan, Chief Software Officer, United States Air Force, and his keynote talk “DevSecOps in Government and Highly Regulated Industries” \n - How The U.S. Army Cyber School Created ‘Courseware-as-Code’ With GitLab \n - Deployment & Adoption of GitLab in Government \n - DevSecOps At The Brazilian Federal Public Ministry...Exclusively With Open Source Tools \n - DevOps 101: Getting to Minimal Viable ‘DevOpsness’ \n - Scaling DevOps at the NSA \n - Accelerating Speed to Mission Through Low-to-High Cross Domain Collaboration \n - Enabling the Tactical Edge Through DevSecOps in a Box \n - Cloud-Native Security: Processes And Tools To Protect Modern Applications \n - DevOps 101: Getting to Minimal Viable ‘DevOpsness’ \n \nAfter absorbing the presentations shared at Commit, if you’re finding you’d like to dive a little deeper and explore a bit more, [reach out to us](https://about.gitlab.com/company/contact/) and we’ll be happy to connect with you and keep the conversation going!\n \nTo learn more about GitLab Public Sector, please visit: https://about.gitlab.com/solutions/public-sector/\n","unfiltered",[685,2387,855,9],"DevSecOps",{"slug":2389,"featured":6,"template":688},"top-ten-reasons-to-check-out-gitlab-virtual-commit","content:en-us:blog:top-ten-reasons-to-check-out-gitlab-virtual-commit.yml","Top Ten Reasons To Check Out Gitlab Virtual 
Commit","en-us/blog/top-ten-reasons-to-check-out-gitlab-virtual-commit.yml","en-us/blog/top-ten-reasons-to-check-out-gitlab-virtual-commit",{"_path":2395,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":2396,"content":2402,"config":2407,"_id":2409,"_type":13,"title":2410,"_source":15,"_file":2411,"_stem":2412,"_extension":18},"/en-us/blog/understanding-kubernestes-rbac",{"title":2397,"description":2398,"ogTitle":2397,"ogDescription":2398,"noIndex":6,"ogImage":2399,"ogUrl":2400,"ogSiteName":675,"ogType":676,"canonicalUrls":2400,"schema":2401},"What you need to know about Kubernetes RBAC","Role-based access control is now default, and expected in most Kubernetes deployments. Here's the What, Why and How of RBAC.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749678884/Blog/Hero%20Images/understanding-kubernetes-rbac-post-cover.jpg","https://about.gitlab.com/blog/understanding-kubernestes-rbac","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"What you need to know about Kubernetes RBAC\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Abubakar Siddiq Ango\"}],\n        \"datePublished\": \"2018-08-07\",\n      }",{"title":2397,"description":2398,"authors":2403,"heroImage":2399,"date":2404,"body":2405,"category":683,"tags":2406},[980],"2018-08-07","Managing access to resources is an essential part of ensuring the\nreliability, security, and efficiency of any infrastructure, but can quickly\nget complicated to manage. With Kubernetes, attribute-based access control\n(ABAC) is very powerful but complex, while role-based access control (RBAC)\nmakes it easier to manage permissions using kubectl and the Kubernetes API\ndirectly. 
This post shares how to get started with RBAC and some best practices to adopt.\n\n## RBAC vs ABAC\n\nRBAC reached beta [with Kubernetes 1.6](https://kubernetes.io/blog/2017/04/rbac-support-in-kubernetes/) and general availability [with 1.8](https://kubernetes.io/blog/2017/10/using-rbac-generally-available-18/). A fundamental building block of Kubernetes, RBAC is an authorization mechanism for controlling how the Kubernetes API is accessed using permissions.\n\nRBAC is now preferred over ABAC, which is difficult to manage and understand. ABAC also requires SSH and root access to make authorization policy changes.\n\nWith RBAC, resource management can be delegated without giving away SSH access to the cluster master VM, and permission policies can be configured using kubectl or the Kubernetes API itself.\n\n## RBAC resources\n\nWith RBAC, authorization is granted through sets of permissions that can be scoped to a single namespace or to the entire cluster. A set of permissions scoped to a namespace is called a Role; a set that applies cluster-wide is called a ClusterRole.\n\nBelow is an example of a role definition:\n\n### Role\n\n```yaml\nkind: Role\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n  namespace: default\n  name: pod-reader\nrules:\n- apiGroups: [\"\"] # \"\" indicates the core API group\n  resources: [\"pods\"]\n  verbs: [\"get\", \"watch\", \"list\"]\n```\n\nLike other Kubernetes resources, a role definition contains kind, apiVersion, and metadata, but with the addition of rules.\n\nThe rules key defines how your permissions work: you specify which resources within the given apiGroup(s) are permitted and how they can be accessed using verbs (including `create`, `delete`, `deletecollection`, `get`, `list`, `patch`, `update`, and `watch`). The apiGroups key defines the location in the API where the resources are found. 
If you provide an empty value in this list, it refers to the core API group.\n\n### ClusterRole\n\n```yaml\nkind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n  # \"namespace\" omitted since ClusterRoles are not namespaced\n  name: secret-reader\nrules:\n- apiGroups: [\"\"]\n  resources: [\"secrets\"]\n  verbs: [\"get\", \"watch\", \"list\"]\n```\n\nThe major difference in a `ClusterRole` definition is the absence of a namespace, because the permissions defined here are cluster-scoped. However, when referenced by a `RoleBinding`, a `ClusterRole` can be used to grant its permissions to namespaced resources within the `RoleBinding`’s namespace.\n\n### RoleBinding and ClusterRoleBinding\n\nA RoleBinding associates a Role with a user or list of users, granting those users the Role's permissions. The user(s) are defined under subjects, and the Role association under the role reference (roleRef). For example:\n\n#### RoleBinding:\n\n```yaml\nkind: RoleBinding\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n  name: read-pods\n  namespace: default\nsubjects:\n- kind: User\n  name: abu\n  apiGroup: rbac.authorization.k8s.io\nroleRef:\n  kind: Role\n  name: pod-reader\n  apiGroup: rbac.authorization.k8s.io\n```\n\n#### ClusterRoleBinding:\n\n```yaml\nkind: ClusterRoleBinding\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n  name: read-secrets-global\nsubjects:\n- kind: Group\n  name: manager\n  apiGroup: rbac.authorization.k8s.io\nroleRef:\n  kind: ClusterRole\n  name: secret-reader\n  apiGroup: rbac.authorization.k8s.io\n```\n\n## Best practices\n\nApplying the principle of [least privilege](https://medium.com/@haim_50405/establish-least-privileged-best-practice-for-your-kubernetes-clusters-f0785e1aee39) is crucial, as it reduces exposure and vulnerability. 
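Subjects are not limited to users and groups: a Role can also be bound to a service account, a pattern that comes up in the best practices below. An illustrative binding that reuses the pod-reader Role from earlier (the ServiceAccount name is hypothetical):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ci-deployer          # hypothetical service account for a CI process
  namespace: default
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ci-deployer-read-pods
  namespace: default
subjects:
- kind: ServiceAccount       # note: no apiGroup for ServiceAccount subjects
  name: ci-deployer
  namespace: default
roleRef:
  kind: Role
  name: pod-reader           # the Role defined earlier in this post
  apiGroup: rbac.authorization.k8s.io
```

You can then check what the binding allows with `kubectl auth can-i get pods --as=system:serviceaccount:default:ci-deployer -n default`.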
A few of the essential best practices include:\n\n- Be specific about the resources you are granting access to and the verbs being used; avoid wildcards\n- Use Roles instead of ClusterRoles where possible\n- Only give the permissions required for the specific tasks a user needs to perform, and nothing more\n- Create and use service accounts for processes and services like [Tiller](https://docs.helm.sh/rbac#tiller-and-role-based-access-control) that require permissions, instead of using the default service accounts\n\n## GitLab + RBAC\n\nCurrently, integrating GitLab with a Kubernetes cluster with RBAC enabled is not supported. You will need to enable and use the legacy ABAC mechanism ([see the documentation here](https://docs.gitlab.com/ee/user/project/clusters/index.html#security-implications)). RBAC will be supported in [a future release](https://gitlab.com/gitlab-org/gitlab-ce/issues/29398). This affects GitLab.com and all self-managed versions of GitLab.\n\n## Learn more\n\n- [Controlling access](https://kubernetes.io/docs/reference/access-authn-authz/controlling-access/)\n- [Authorization](https://kubernetes.io/docs/reference/access-authn-authz/authorization/)\n- [RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/)\n- [RBAC and TLS certificates](https://sysdig.com/blog/kubernetes-security-rbac-tls/)\n",[9,727],{"slug":2408,"featured":6,"template":688},"understanding-kubernestes-rbac","content:en-us:blog:understanding-kubernestes-rbac.yml","Understanding Kubernestes 
Rbac","en-us/blog/understanding-kubernestes-rbac.yml","en-us/blog/understanding-kubernestes-rbac",{"_path":2414,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":2415,"content":2421,"config":2426,"_id":2428,"_type":13,"title":2429,"_source":15,"_file":2430,"_stem":2431,"_extension":18},"/en-us/blog/unifylogsmetrics",{"title":2416,"description":2417,"ogTitle":2416,"ogDescription":2417,"noIndex":6,"ogImage":2418,"ogUrl":2419,"ogSiteName":675,"ogType":676,"canonicalUrls":2419,"schema":2420},"How to integrate operation logs and metrics in GitLab","We've added Elasticsearch to our monitoring solution so you can see all your logs and metrics in one view. Here's a first look at this new feature, released in GitLab 12.8.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749666923/Blog/Hero%20Images/logs.png","https://about.gitlab.com/blog/unifylogsmetrics","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"How to integrate operation logs and metrics in GitLab\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Dov Hershkovitch\"}],\n        \"datePublished\": \"2020-03-03\",\n      }",{"title":2416,"description":2417,"authors":2422,"heroImage":2418,"date":2423,"body":2424,"category":683,"tags":2425},[1420],"2020-03-03","\nLogging is one of the most powerful tools we have when trying to understand an application problem. It is a common practice – when things go wrong in production, one of the first requests is often, \"Please send me the logs!\" Raw logs contain useful information which can help pinpoint the root cause(s) of problems.\n\nBut using raw logs isn’t always a straightforward process. This is especially true, in a modern, distributed, and often ephemeral architecture. Ideally logs should be available across the entire application, be searchable and offer at least some access to past history. 
Historically, aggregated logging solutions, if they existed, were only piecemeal. This forced developers to spend time and energy tracking down important log data, which ultimately resulted in logs being far less useful than they could be.\n\nWith the [12.8 release](/releases/2020/02/22/gitlab-12-8-released/), to ease navigating through logs, we added the [Elastic Stack](https://docs.gitlab.com/ee/integration/advanced_search/elasticsearch.html) as our log aggregation tool and the [Log Explorer](/releases/2020/02/22/gitlab-12-8-released/#explore-aggregated-logs) so you can interact with all your logs in one place.\n\nBut before we look at the logging capabilities, let’s take a step back and look at the big picture.\n\n## Why monitoring matters\n\nAt GitLab, we aim to provide users with a complete [DevSecOps platform](/solutions/security-compliance/), delivered as a single application. To do so, we have divided the DevSecOps lifecycle into [ten different stages](/stages-devops-lifecycle/). The final Ops stage of the [DevOps](/topics/devops/) loop, [Monitor](/direction/monitor/), should occur right after the production environment is configured and the application deployed. This is a critical stage that should not be ignored.\n\nIn fact, it’s commonly believed in the DevOps world that no developer should ship code into production without monitoring, as it helps ensure an application behaves as expected. If something isn’t right, you will be alerted (hopefully before your users start to complain). If you are thinking about ignoring monitoring, always remember: _customers_ are the most expensive monitoring solution you can have.\n\n### Chasing Observability\n\nObservability is the ability to infer the internal states of a system based on the system’s external outputs. Monitoring, on the other hand, is the activity of observing the state of a system over time. 
To achieve observability, your system’s telemetry, including metrics, traces, and logs, should all be available to enable proactive introspection and greater operational visibility. We believe that DevOps practitioners should have observability as a goal.\n\nGitLab’s vision for the Monitor category is to build a consolidated and integrated observability tool which will, over time, displace today's front-runner in modern observability. In pursuit of this vision and to focus our efforts, we are building our solutions on a cloud native first principle, solving cloud native problems by selecting open source products that are cloud native compatible. And, in fact, as part of GitLab’s [New Year’s gift for 2020](/blog/observability/), we're moving a big portion of the observability features – custom metrics, logging, tracing and alerting – from our proprietary codebase to the open source codebase this year.\n\n### Metrics & Traces\n\nToday, if you use GitLab to deploy your application into a Kubernetes cluster, with a push of a button you can deploy monitoring (via a Prometheus instance) into that cluster. [Prometheus](https://prometheus.io/) will automatically start collecting key metrics from your deployed application (such as latency, error rate, and throughput), and send them directly to the GitLab UI. In addition to the out-of-the-box metrics and dashboard, it is possible to customize Prometheus directly from the GitLab UI (using PromQL) to collect any metric you desire and present it on a dashboard. You can set up a threshold, create an alert on it, and open an issue as a part of an incident management solution, all without leaving the GitLab UI.\n\nAs a developer, when there is an issue, you want to drill down to the exact function or microservice that is causing the trouble. 
GitLab uses [Jaeger](https://www.jaegertracing.io/), an end-to-end distributed tracing system for microservices-based distributed systems.\n\n## Get started with logs\n\nBefore the 12.8 release, existing Monitor stage users already had the ability to view pod logs directly from within the GitLab UI. However, this was done only through the available Kubernetes APIs. Viewing logs through the Kubernetes APIs is limited to a log-tailing experience on a specific pod across environments.\n\nWith the 12.8 release, any user can deploy the Elastic Stack - a specific flavor of Elasticsearch alongside a component called [Filebeat](https://www.elastic.co/beats/filebeat) - to a Kubernetes cluster with the push of a button (similar to the way we deploy Prometheus). Once deployed, it automatically starts collecting all logs that are coming from the cluster and applications across the available environments (production, staging, testing, etc.) and they will be surfaced in the GitLab UI. In addition, users can navigate directly from the metric chart to the log explorer while preserving the context.\n\nThis is extremely helpful when triaging an incident or validating the status of your service. In the cloud-native world, log aggregation becomes critical for observability, as logs are distributed across multiple pods and services. Using our new solution, you can get an aggregated view of all logs across multiple services and infrastructures, go back in time, search through logs, and more.\n\n##  What's next\n\nI hope you found this overview useful. To get started, download GitLab and read its documentation for more in-depth coverage of the functionality. 
One of the fastest ways to experience GitLab features is to use the .com version — which is a hosted GitLab.\n\nIf you would like to get in touch with the Monitoring team please comment and contribute to the linked [issues](https://gitlab.com/groups/gitlab-org/-/issues?scope=all&utf8=%E2%9C%93&state=opened&label_name[]=group%3A%3Aapm&label_name[]=Category%3ALogging) and [epics](https://gitlab.com/groups/gitlab-org/-/epics?scope=all&utf8=%E2%9C%93&state=opened&label_name[]=group%3A%3Aapm&label_name[]=Category%3ALogging) on this page. Sharing your feedback directly on GitLab.com is the best way to contribute to our strategy and vision.\n\nIf you're a GitLab user and have direct knowledge of your logging usage, we'd especially love to hear your use case(s).\n\nWatch my entire YouTube video on logging:\n\n\u003C!-- blank line -->\n\u003Cfigure class=\"video_container\">\n  \u003Ciframe src=\"https://www.youtube.com/embed/hWclZHA7Dgw\" frameborder=\"0\" allowfullscreen=\"true\"> \u003C/iframe>\n\u003C/figure>\n\u003C!-- blank line -->\n",[9,835,984],{"slug":2427,"featured":6,"template":688},"unifylogsmetrics","content:en-us:blog:unifylogsmetrics.yml","Unifylogsmetrics","en-us/blog/unifylogsmetrics.yml","en-us/blog/unifylogsmetrics",{"_path":2433,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":2434,"content":2440,"config":2445,"_id":2447,"_type":13,"title":2448,"_source":15,"_file":2449,"_stem":2450,"_extension":18},"/en-us/blog/use-gitlab-to-detect-vulnerabilities",{"title":2435,"description":2436,"ogTitle":2435,"ogDescription":2436,"noIndex":6,"ogImage":2437,"ogUrl":2438,"ogSiteName":675,"ogType":676,"canonicalUrls":2438,"schema":2439},"How to use GitLab security features to detect log4j vulnerabilities","Detailed guidance to help customers detect vulnerabilities.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749666816/Blog/Hero%20Images/security-cover.png","https://about.gitlab.com/blog/use-gitlab-to-detect-vulnerabilities","\n                      
  {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"How to use GitLab security features to detect log4j vulnerabilities\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"GitLab\"}],\n        \"datePublished\": \"2021-12-15\",\n      }",{"title":2435,"description":2436,"authors":2441,"heroImage":2437,"date":2442,"body":2443,"category":300,"tags":2444},[1166],"2021-12-15","_Note: Out of an abundance of caution, we encourage users who are using\nolder versions of GitLab SAST and Dependency Scanning to update to the\nlatest versions. You can find more information and recommended actions in\n[this blog\npost](/blog/updates-and-actions-to-address-logj-in-gitlab/)._\n\n\n_Any customer leveraging the [recommended\nincludes](https://docs.gitlab.com/ee/user/application_security/sast/#configure-sast-in-your-cicd-yaml)\nfor GitLab SAST has automatically received the new patched versions released\nDec 13, 2021._\n\n\nIn light of the recently discovered log4j vulnerabilities, we would like to\ndemonstrate how GitLab can be used to assess and remediate the log4j\nvulnerability as well as other security vulnerabilities that may exist in\nyour projects.\n\n\nThe solutions shared here are: \n\n* [Dependency Scanning\n(Ultimate)](#use-gitlab-dependency-scanning-to-detect-and-mitigate-log4j-vulnerabilities)\n\n* [Container Scanning\n(Ultimate)](#detect-log4j-vulnerabilities-with-container-scanning)\n\n* [Cluster image scanning\n(Ultimate)](#detect-vulnerable-containers-in-your-kubernetes-cluster)\n\n* [Advanced Search\n(Premium)](#search-gitlab-projects-which-use-the-log4j-java-library)\n\n\nFree users wishing to access Premium and Ultimate features can do so by\nsigning up for a [free trial](https://about.gitlab.com/free-trial/) of\nGitLab. 
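\n\n\nAll of the scanner features above are enabled by including their CI/CD templates in `.gitlab-ci.yml` (Advanced Search needs no pipeline configuration). As a minimal sketch – combining the three templates in a single pipeline is an assumption here; each one is walked through individually below:\n\n\n```yaml\n\ninclude:\n\n- template: Security/Dependency-Scanning.gitlab-ci.yml\n\n- template: Security/Container-Scanning.gitlab-ci.yml\n\n- template: Security/Cluster-Image-Scanning.gitlab-ci.yml\n\n```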
\n\n\n### Use GitLab dependency scanning to detect and mitigate log4j\nvulnerabilities \n\n\n[Dependency\nscanning](https://docs.gitlab.com/ee/user/application_security/dependency_scanning)\nuses Gemnasium, which has been\n[updated](https://gitlab.com/gitlab-org/security-products/gemnasium-db/-/merge_requests/11381)\nto detect the log4j vulnerability, to automatically find security\nvulnerabilities in your software dependencies.\n\n\nLet’s try dependency scanning with a vulnerable project. Navigate to `Create\nnew project > Import project > from URL` and use\n`https://github.com/christophetd/log4shell-vulnerable-app.git`. \n\n\nNext, navigate to `Security & Compliance > Security dashboard` and select to\nconfigure `Dependency Scanning`. This will create a new merge request\nenabling the dependency scanner, and you can immediately see the first\n[scanning\nresults](https://gitlab.com/gitlab-de/playground/log4shell-vulnerable-app/-/pipelines/427550530/security)\nin the [merge\nrequest](https://gitlab.com/gitlab-de/playground/log4shell-vulnerable-app/-/merge_requests/1). \n\n\nAlternatively, you can edit the `.gitlab-ci.yml` configuration file and\ninclude the Dependency Scanning CI/CD template.\n\n\n```yaml\n\ninclude:\n\n- template: Security/Dependency-Scanning.gitlab-ci.yml\n\n```\n\n\nCreate a new merge request and wait for the pipeline to finish. Inspect the\nsecurity reports. \n\n\n![GitLab security\nreport](https://about.gitlab.com/images/blogimages/2021-12-15-use-gitlab-to-detect-log4j/image2.png){:\n.shadow}\n\n\nTake action on the critical vulnerability, open the details and create a new\nconfidential security issue to follow-up. \n\n\n![Details of security\nvulnerability](https://about.gitlab.com/images/blogimages/2021-12-15-use-gitlab-to-detect-log4j/image9.png){:\n.shadow}\n\n\nAfter merging the MR to add dependency scanning, future MRs and code changes\nwill detect the log4j vulnerabilities. 
This helps to avoid accidentally\nintroducing older versions again. Open the `Security report` in `Security &\nCompliance` to get an overview of the vulnerabilities. \n\n\n![Panel showing security\nvulnerabilities](https://about.gitlab.com/images/blogimages/2021-12-15-use-gitlab-to-detect-log4j/image4.png){:\n.shadow}\n\n\nYou can customize the default settings using [CI/CD\nvariables](https://docs.gitlab.com/ee/user/application_security/dependency_scanning/#customizing-the-dependency-scanning-settings),\nfor example increasing the log level to debug with `SECURE_LOG_LEVEL:\n‘debug’`. \n\n\nThe project created in the examples above is located\n[here](https://gitlab.com/gitlab-de/playground/log4shell-vulnerable-app). \n\n\n### Detect log4j vulnerabilities with Container Scanning\n\n\nVulnerabilities in container images can come not only from the source code\nfor the application, but also from packages and libraries that are installed\non the base image. Images can inherit packages and vulnerabilities from\nother container images using the `FROM` keyword in a `Dockerfile`.\n[Container\nScanning](https://docs.gitlab.com/ee/user/application_security/container_scanning/)\nhelps detect these vulnerabilities for the Operating System including\npackages. The latest release adds language vulnerability scans as a new\noptional feature to help detect the log4j library vulnerability using the\nunderlying scanners (Trivy as default, Grype optional). You can also use\nthis capability to scan remote images using the `DOCKER_IMAGE` variable.\n\n\nYou can enable the `CS_DISABLE_LANGUAGE_VULNERABILITY_SCAN` variable to\n[scan for language specific\npackages](https://docs.gitlab.com/ee/user/application_security/container_scanning/#report-language-specific-findings).\nPlease note that the additionally detected language dependencies can cause\nduplicates when you enable Dependency Scanning too. 
\n\n\nTo try it, navigate to `CI/CD > Pipeline Editor` and add the following\nconfiguration for Container Scanning:\n\n\n```yaml\n\ninclude:\n    - template: Security/Container-Scanning.gitlab-ci.yml\n\nvariables:\n    # Use Trivy or Grype as security scanners (Trivy is the default in the included template)\n    # CS_ANALYZER_IMAGE: \"registry.gitlab.com/security-products/container-scanning/trivy:4\"\n    # CS_ANALYZER_IMAGE: \"registry.gitlab.com/security-products/container-scanning/grype:4\"\n    # Detect language libraries as dependencies\n    CS_DISABLE_LANGUAGE_VULNERABILITY_SCAN: \"false\"\n    # Test the vulnerable log4j image \n    DOCKER_IMAGE: registry.gitlab.com/gitlab-de/playground/log4shell-vulnerable-app:latest \n```\n\n\nCreate a new branch, commit the changes and create a new MR. Once the\npipeline has completed, inspect the security report in the MR. \n\n\n![List of vulnerabilities detected by container\nscanning](https://about.gitlab.com/images/blogimages/2021-12-15-use-gitlab-to-detect-log4j/image6.png){:\n.shadow}\n\n\nAfter merging the MR, you can view the vulnerabilities that exist in your\ndefault branch by navigating to `Security & Compliance > Vulnerability\nReport`. \n\n\n![Panel showing security\nvulnerabilities](https://about.gitlab.com/images/blogimages/2021-12-15-use-gitlab-to-detect-log4j/image7.png){:\n.shadow}\n\n\nInspect the vulnerability details to take action.\n\n\n![Detail on\nvulnerability](https://about.gitlab.com/images/blogimages/2021-12-15-use-gitlab-to-detect-log4j/image8.png){:\n.shadow}\n\n\nThis feature is available for customers using the default CI/CD templates,\nor the tagged `:4` scanner images from  GitLab's Container Registry\n(registry.gitlab.com). 
If you are using custom images, please rebuild them\nbased on the latest release.\n\n\n### Detect vulnerable containers in your Kubernetes cluster\n\n\nYou can use [cluster image scanning in\nKubernetes](https://docs.gitlab.com/ee/user/clusters/agent/vulnerabilities.html)\nwhich uses Starboard and [uses Trivy as a security\nscanner](https://aquasecurity.github.io/starboard/v0.13.1/integrations/vulnerability-scanners/trivy/)\nunder the hood. Trivy’s vulnerability DB is able to detect CVE-2021-44228.\n\n\nLet’s try it! A quick way to bring up a Kubernetes cluster is in Civo Cloud.\nCreate an account, and follow the documentation on [how to set up the\nCLI](https://www.civo.com/learn/kubernetes-cluster-administration-using-civo-cli)\nwith an API token. Next, create a k3s cluster. \n\n\n```shell\n\n$ civo kubernetes create log4j\n\n$ civo kubernetes config log4j --save\n\n$ kubectl config use-context log4j\n\n$ kubectl get node\n\n```\n\n\n`registry.gitlab.com/gitlab-de/playground/log4shell-vulnerable-app:latest`\nprovides a vulnerable container image we can deploy and then scan. \n\n\n```shell\n\n$ vim deployment.yaml\n\n\napiVersion: apps/v1\n\nkind: Deployment\n\nmetadata:\n  name: log4j\nspec:\n  replicas: 2\n  selector:\n    matchLabels:\n      app: log4j\n  template:\n    metadata:\n      labels:\n        app: log4j\n    spec:\n      containers:\n        - image: registry.gitlab.com/gitlab-de/playground/log4shell-vulnerable-app:latest\n          name: log4j\n\n$ kubectl apply -f deployment.yaml\n\n```\n\n\n```shell\n\n$ vim service.yaml\n\n\napiVersion: v1\n\nkind: Service\n\nmetadata:\n  name: log4j\n  labels:\n    app: log4j\nspec:\n  ports:\n    - name: \"log4j\"\n      port: 8080\n  selector:\n    app: log4j\n\n$ kubectl apply -f service.yaml\n\n```\n\n\nTest the application container with port forwarding, and open your browser\nat http://localhost:8080. You can close the connection with `ctrl+c`. 
\n\n\n```\n\n$ kubectl port-forward service/log4j 8080:8080\n\n```\n\n\nAfter the deployment is finished, let’s add the cluster image scanning\nintegration. Follow the [Starboard\nOperator](https://aquasecurity.github.io/starboard/v0.13.1/operator/installation/kubectl/)\ninstallation documentation. Next, configure the [Kubernetes Cluster Image\nScanning](https://docs.gitlab.com/ee/user/clusters/agent/vulnerabilities.html)\nwith GitLab. \n\n\nThe final step is to integrate the CI/CD template and run the pipelines. \n\n\n```yaml\n\ninclude:\n  - template: Security/Cluster-Image-Scanning.gitlab-ci.yml\n```\n\n\nNavigate into `Security & Compliance > Vulnerability report` and select the\n`Operational vulnerabilities` tab to inspect the vulnerabilities. There you\ncan see that `log4j` was detected in the deployed application running in our\nKubernetes cluster 💜. \n\n\n![Panel showing security\nvulnerabilities](https://about.gitlab.com/images/blogimages/2021-12-15-use-gitlab-to-detect-log4j/image5.png){:\n.shadow}\n\n\nInspect the `log4j` vulnerability to see more details. \n\n\n![Detail on\nvulnerability](https://about.gitlab.com/images/blogimages/2021-12-15-use-gitlab-to-detect-log4j/image3.png){:\n.shadow}\n\n\nThe full project is located\n[here](https://gitlab.com/gitlab-de/playground/log4j-kubernetes-container-scanning).\n\n\n### Search GitLab projects which use the log4j Java library\n\n\nYou can use the [advanced search with scope\nblobs](https://docs.gitlab.com/ee/api/search.html#scope-blobs). Let’s try\nit! Navigate to your profile and add a new personal access token (PAT).\nExport it into the environment to access it in the next step:\n\n\n```shell\n\n$ export GITLAB_TOKEN=xxxxxxxxx\n\n\n$ curl --header \"PRIVATE-TOKEN: $GITLAB_TOKEN\"\n\"https://gitlab.com/api/v4/search?scope=blobs&search=log4j\" \n\n```\n\n\nTip: Install jq to format the JSON body. More insights in [this blog\npost](/blog/devops-workflows-json-format-jq-ci-cd-lint/). 
\n\n\n```shell\n\n$ curl --header \"PRIVATE-TOKEN: $GITLAB_TOKEN\"\n\"https://gitlab.com/api/v4/search?scope=blobs&search=log4j\" | jq\n\n  {\n    \"basename\": \"src/main/resources/log4j\",\n    \"data\": \"log4j.rootLogger=ERROR, stdout\\n \\n# Direct log messages to stdout\\n\",\n    \"path\": \"src/main/resources/log4j.properties\",\n    \"filename\": \"src/main/resources/log4j.properties\",\n    \"id\": null,\n    \"ref\": \"9a1df407e1a5365950a77f715163f6dba915fdf4\",\n    \"startline\": 2,\n    \"project_id\": 12345678\n  },\n\n```\n\n\nYou can use `jq` to further transform and filter the result set, for example\nonly listing the paths where `log4j` as a string exists.  \n\n\n```\n\ncurl --header \"PRIVATE-TOKEN: $GITLAB_TOKEN\"\n\"https://gitlab.com/api/v4/search?scope=blobs&search=log4j\" | jq -c '.[] |\nselect (.path | contains (\"log4j\"))' | jq\n\n```\n\n\n### Next steps \n\n\nThe GitLab security team is continuing to proactively monitor the situation\nand ensure our product and customers are secure. We will continue to\ncommunicate should we identify additional opportunities to help our\ncustomers and community navigate through this situation. 
Please [subscribe\nto our security alerts mailing\nlist](https://about.gitlab.com/company/preference-center/).\n\n\nPlease visit the public [log4j-resources\nproject](https://gitlab.com/gitlab-de/log4j-resources) and visit our\n[forum](https://forum.gitlab.com/c/devsecops-security/) for additional\ninformation.\n",[855,9,748],{"slug":2446,"featured":6,"template":688},"use-gitlab-to-detect-vulnerabilities","content:en-us:blog:use-gitlab-to-detect-vulnerabilities.yml","Use Gitlab To Detect Vulnerabilities","en-us/blog/use-gitlab-to-detect-vulnerabilities.yml","en-us/blog/use-gitlab-to-detect-vulnerabilities",{"_path":2452,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":2453,"content":2459,"config":2464,"_id":2466,"_type":13,"title":2467,"_source":15,"_file":2468,"_stem":2469,"_extension":18},"/en-us/blog/what-is-cloud-native",{"title":2454,"description":2455,"ogTitle":2454,"ogDescription":2455,"noIndex":6,"ogImage":2456,"ogUrl":2457,"ogSiteName":675,"ogType":676,"canonicalUrls":2457,"schema":2458},"A beginner's guide to cloud native","If you’re a little fuzzy on what makes an application cloud native, this explainer will help you get up to speed.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749665811/Blog/Hero%20Images/daytime-clouds.jpg","https://about.gitlab.com/blog/what-is-cloud-native","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"A beginner's guide to cloud native\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Aricka Flowers\"}],\n        \"datePublished\": \"2018-10-08\",\n      }",{"title":2454,"description":2455,"authors":2460,"heroImage":2456,"date":2461,"body":2462,"category":790,"tags":2463},[2129],"2018-10-08","\n\n## What is cloud native? Everything you need to know\n\nThe term [cloud native](/topics/cloud-native/) has been bandied about in the tech world a lot over the last few years, but it's still often misunderstood. 
Although it's an important part, simply being run in the cloud does not make an application cloud native; it must also be built in the cloud. One of the first and currently largest cloud computing providers, Amazon Web Services paved the way for cloud native app development, a now more than [$20 billion market](https://www.canalys.com/newsroom/cloud-infrastructure-spend-reaches-us%2420-billion-in-q2-2018-with-hybrid-it-approach-dominant) as of this year. With its growing popularity, organizations like the Cloud Native Computing Foundation (CNCF) have sprouted to help foster the growth of cloud native app development.\n\n[CNCF](https://www.cncf.io/), an open source software organization focused on promoting the cloud-based app building and deployment approach, [defines cloud native](https://github.com/cncf/toc/blob/master/DEFINITION.md) as the following:\n\n>Cloud native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds.\n>\n>Containers, service meshes, [microservices](/topics/microservices/), immutable infrastructure, and declarative APIs exemplify this approach. These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil.\n\nTo break it down even further: For an application to be cloud native, it must be built and run in the cloud. This requires multiple tools that allow app developers to make use of the architectural advantages of cloud infrastructure.\n\n## There are three main building blocks of cloud native architecture\n\n### Containers\n\n[Containers](/blog/containers-kubernetes-basics/) are an [alternative way to package applications](https://searchitoperations.techtarget.com/tip/What-are-containers-and-how-do-they-work) versus building for VMs or physical servers directly. 
Containers can run inside of a virtual machine or on a physical server. Containers hold an application’s libraries and processes, but don't include an operating system, making them lightweight. In the end, fewer servers are needed to run multiple instances of an application which reduces cost and makes them easier to scale. Some other [benefits of containers](https://tsa.com/top-5-benefits-of-containerization/) include faster deployment, better portability and scalability, and improved security.\n\n### Orchestrators\n\nOnce the containers are set, an orchestrator is needed to get them running. Container orchestrators direct how and where containers run, fix any that go down and determine if more are needed. When it comes to container orchestrators, also known as schedulers, Kubernetes is the [clear cut market winner](/blog/top-five-cloud-trends/).\n\n### Microservices\n\nThe last main component of cloud native computing is microservices. In order to make apps run more smoothly, they can be broken down into smaller parts, or microservices, to make them easier to scale based on load. Microservices infrastructure also makes it easier – and faster – for engineers to develop an app. Smaller teams can be formed and assigned to take ownership of individual components of the app’s development, allowing engineers to code without potentially impacting another part of the project.\n\nWhile public cloud services like AWS offer the opportunity to build and deploy applications easily, there are times when it makes sense to build your own infrastructure. A private or hybrid cloud solution is generally needed when sensitive data is processed within an application or industry regulations call for increased controls and security.\n\n## How to streamline cloud native development\n\nAs you can see, cloud native app development requires the incorporation of several tools for a successful deployment. 
It begs for a DevOps approach to efficiently streamline the multiple elements needed to get an app up and running in the cloud. This is where GitLab comes in.\n\nWe're aiming to make GitLab the best place to build cloud native apps. With [built-in registry](https://docs.gitlab.com/ee/user/packages/container_registry/index.html) and [Kubernetes integrations](https://docs.gitlab.com/ee/user/project/clusters/index.html) we're always working to offer new ways to simplify toolchains and speed up cycle times, making it easier to transition to a cloud native environment.\n\nFor a deeper dive into cloud native, check out our training video by GitLab Product Marketing Manager [William Chia](/company/team/#thewilliamchia):\n\n\u003C!-- blank line -->\n\u003Cfigure class=\"video_container\">\n  \u003Ciframe src=\"https://www.youtube.com/embed/jc5cY3LoOOI?start=90\" frameborder=\"0\" allowfullscreen=\"true\"> \u003C/iframe>\n\u003C/figure>\n\u003C!-- blank line -->\n\nCover photo by [Sam Schooler](https://unsplash.com/photos/E9aetBe2w40) on [Unsplash](https://unsplash.com/)\n{: .note}\n",[727,9],{"slug":2465,"featured":6,"template":688},"what-is-cloud-native","content:en-us:blog:what-is-cloud-native.yml","What Is Cloud Native","en-us/blog/what-is-cloud-native.yml","en-us/blog/what-is-cloud-native",{"_path":2471,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":2472,"content":2477,"config":2482,"_id":2484,"_type":13,"title":2485,"_source":15,"_file":2486,"_stem":2487,"_extension":18},"/en-us/blog/what-to-expect-at-predict-2019",{"title":2473,"description":2474,"ogTitle":2473,"ogDescription":2474,"noIndex":6,"ogImage":782,"ogUrl":2475,"ogSiteName":675,"ogType":676,"canonicalUrls":2475,"schema":2476},"2019 cloud native predictions from the Predict 2019 Conference","Break out your sunglasses, because the cloud native forecast for 2019 is sunny.","https://about.gitlab.com/blog/what-to-expect-at-predict-2019","\n                        {\n        \"@context\": 
\"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"2019 cloud native predictions from the Predict 2019 Conference\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Tina Sturgis\"}],\n        \"datePublished\": \"2018-12-12\",\n      }",{"title":2473,"description":2474,"authors":2478,"heroImage":782,"date":2479,"body":2480,"category":790,"tags":2481},[1647],"2018-12-12","\n\nGet the latest 2019 predictions from GitLab and other industry experts. [Sign me up](https://predict2019.com/#join-us)!\n{: .alert .alert-info}\n\nI love this time of year!  But it isn't for the reasons you may be thinking ... it's not the holiday decorations, shopping for gifts for loved ones ... it is about PREDICTIONS! Yep, I am a prediction junkie! I love to stop, do a little research as the end of December rolls around, reflect on what happened in that year, and begin to forecast trends I believe will emerge in the new year.\n\nThis year, one of the most exciting areas I wanted to dive into a prediction of is [cloud native](/topics/cloud-native/). It is no longer just a ‘fad,’ enterprises are realizing benefits from adopting cloud native. 
So I got together with my closest GitLab team-members and we dove in to provide you with our top five predictions.\n\n## Top predictions around cloud native\n\nThe basis for cloud native applications to flourish has been set and we believe that 2019 will be a great cloud native year.\n\n* Enterprises will adopt a [multi-cloud strategy](https://medium.com/gitlab-magazine/multi-cloud-maturity-model-2de185c01dd7) for their long-term investments.\n* The cloud native stack is maturing with tools like Kubernetes, Prometheus, and Envoy.\n* We are going to see a lot more on [serverless](/topics/serverless/) with the likes of Lambda and Knative.\n* We will see some real movement in the application of artificial intelligence and machine learning.\n\n## What about DevOps and security predictions?\n\nOnce we completed our research and position on cloud native predictions, we teamed up with [DevOps.com](https://www.devops.com) to participate in their on-demand virtual conference, [Predict 2019](https://predict2019.com/#join-us), that includes predictions around cloud security, DevOps, and quality testing with a [cast of speakers](https://predict2019.com/#speakers) that will educate and inspire you as you move into 2019!\n\n[Sign up now to attend Predict 2019](https://predict2019.com/#join-us)!\n{: .alert .alert-info}\n\nPhoto by [Marc Wieland](https://unsplash.com/photos/zrj-TPjcRLA?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText) on [Unsplash](https://unsplash.com/search/photos/clouds?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText)\n{: .note}\n",[685,9,727,278],{"slug":2483,"featured":6,"template":688},"what-to-expect-at-predict-2019","content:en-us:blog:what-to-expect-at-predict-2019.yml","What To Expect At Predict 
2019","en-us/blog/what-to-expect-at-predict-2019.yml","en-us/blog/what-to-expect-at-predict-2019",{"_path":2489,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":2490,"content":2496,"config":2501,"_id":2503,"_type":13,"title":2504,"_source":15,"_file":2505,"_stem":2506,"_extension":18},"/en-us/blog/whitesource-for-dependency-scanning",{"title":2491,"description":2492,"ogTitle":2491,"ogDescription":2492,"noIndex":6,"ogImage":2493,"ogUrl":2494,"ogSiteName":675,"ogType":676,"canonicalUrls":2494,"schema":2495},"How to secure your dependencies with GitLab and WhiteSource","We walk you through how to configure WhiteSource in your GitLab instance to enhance your application security.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749663445/Blog/Hero%20Images/snowymtns.jpg","https://about.gitlab.com/blog/whitesource-for-dependency-scanning","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"How to secure your dependencies with GitLab and WhiteSource\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Fernando Diaz\"}],\n        \"datePublished\": \"2020-08-10\",\n      }",{"title":2491,"description":2492,"authors":2497,"heroImage":2493,"date":2498,"body":2499,"category":855,"tags":2500},[896],"2020-08-10","GitLab's WhiteSource integration empowers developers to enhance application\nsecurity\ndirectly within the GitLab UI. 
The integration provides dependency scanning with\nin-depth analysis, along with actionable insights and auto-remediation.\nWhiteSource for\nGitLab enhances your team's productivity, security, and compliance.\n\n\n[Rhys Arkins](https://twitter.com/rarkins?lang=en), Product Director at\nWhiteSource, and I hosted a webinar on \"[Harnessing development to scale\nAppSec](/webcast/scalable-secure-ci/)\"\nshowcasing the features of GitLab's WhiteSource integration for open source\ndependency scanning.\n\n\n\u003C!-- blank line -->\n\u003Cfigure class=\"video_container\">\n\u003Ciframe src=\"https://www.youtube-nocookie.com/embed/yJpE_ACt9og\"\nframeborder=\"0\" allowfullscreen=\"true\"> \u003C/iframe>\n\u003C/figure>\n\u003C!-- blank line -->\n\n\nThis blog post will guide you through setting up WhiteSource in your private\nGitLab instance\nand show you how the integration with WhiteSource enhances your\napplication's security within GitLab.\n\n\n## Installing the WhiteSource integration\n\n\nFirst, let's go over how to install the WhiteSource integration. In this\nsection, I will review how to\nset up GitLab service credentials, generate a WhiteSource configuration,\nbuild WhiteSource containers, and run them in a Kubernetes cluster.\n\n\n### Requirements for WhiteSource integration\n\n\nThe WhiteSource integration requires the following setup:\n\n\n- [GitLab on-prem instance](/pricing/#self-managed): The GitLab instance\nwhere the WhiteSource integration will run.\n\n- [WhiteSource\naccount](https://www.whitesourcesoftware.com/whitesource-pricing/): Provides\naccess to the WhiteSource integration.\n\n- [Kubernetes cluster](/solutions/kubernetes/): Deploys the WhiteSource\ncontainers.\n\n\n### Create GitLab service credentials\n\n\nThe next step is to create GitLab service credentials. 
This can be\naccomplished in three simple steps:\n\n\n- In your GitLab instance, go to `Admin Area > System Hooks` and create a\nsystem hook as follows:\n    - **URL:** `https://whitesource.INGRESS_URL.com/payload`\n    - **Secret Token:** Make up a token; you can use `openssl rand -base64 12`\n    - **Trigger:** All except `Tag push events`\n    - **Enable SSL Verification:** `Yes`\n\n  Note: Make sure you save the secret token for use in the next section.\n- Create a user named `@whitesource`, with a developer role. An email is not\nrequired.\n\n- As the `@whitesource` user, go to `Settings > Access tokens` and create a\npersonal access token:\n    - **Name:** `WhiteSourceToken`\n    - **Scopes:** `all`\n- Remember to save the access token for use in the next section.\n\n\n### Generate the WhiteSource configuration\n\n\nNext, we generate the WhiteSource configuration, which is used to configure\nthe WhiteSource integration containers.\n\nThis can be done in a few simple steps:\n\n\n- Log in to\n[WhiteSource](https://saas.whitesourcesoftware.com/Wss/WSS.html#!login) and\nclick on\n\nthe `Integrate` tab.\n\n\n![whitesource webpage\nview](https://about.gitlab.com/images/whitesource-integration/whitesource_webpage_view.png)\n\nWhiteSource main page\n\n{: .note.text-center}\n\n\n- Expand the `WhiteSource for GitLab server` bar and fill in the following:\n    - **GitLab Server API URL:** `https://GITLAB_SERVER_URL/api/v4`\n    - **GitLab Webhook URL:** `https://whitesource.INGRESS_URL.com/payload`\n    - **GitLab Webhook secret:** Use the same secret generated in the GitLab credentials section\n    - **GitLab personal access token:** `@whitesource` user access token\n\n![whitesource integration\nview](https://about.gitlab.com/images/whitesource-integration/whitesource_integration_setup.png)\n\nWhiteSource integrations page\n\n{: .note.text-center}\n\n\n- Press `Get Activation Key` and copy the generated key\n\n- Open 
the\n[wss-configurator](https://gitlab.com/fjdiaz/whitesource-helm/-/blob/master/whitesource/wss-configuration/index.html)\nwith your browser\n\n- Select `Export` from the menu, and select the\n[prop.json](https://gitlab.com/fjdiaz/whitesource-helm/-/blob/master/whitesource/wss-configuration/config/prop.json)\n\n- Click on the `General` tab\n\n- Paste the generated key and click `Export` to save a new `prop.json` file\n\n\n### Build the WhiteSource containers\n\n\n- Move the generated prop.json from the previous section to\n[wss-gls-app](https://gitlab.com/fjdiaz/whitesource-helm/-/tree/master/whitesource/wss-gls-app/docker/conf),\n[wss-remediate](https://gitlab.com/fjdiaz/whitesource-helm/-/tree/master/whitesource/wss-remediate/docker/src),\nand\n[wss-scanner](https://gitlab.com/fjdiaz/whitesource-helm/-/tree/master/whitesource/wss-scanner/docker/conf).\n\n- Build and push the Docker containers:\n\n\n```bash\n\n$ docker build -t wss-gls-app:19.12.2 whitesource/wss-gls-app/docker\n\n$ docker push wss-gls-app:19.12.2\n\n\n$ docker build -t wss-scanner:19.12.1.2 whitesource/wss-scanner/docker\n\n$ docker push wss-scanner:19.12.1.2\n\n\n$ docker build -t wss-remediate:19.12.2 whitesource/wss-remediate/docker\n\n$ docker push wss-remediate:19.12.2\n\n```\n\n\n### Running the WhiteSource containers\n\n\nGitLab provides native Kubernetes cluster integration. 
This means that\nGitLab allows you\n\nto deploy software from [GitLab CI/CD](/topics/ci-cd/) pipelines directly to\nyour Kubernetes cluster.\n\n\nWhiteSource containers can be deployed and managed within the same\nKubernetes cluster\n\nused to deploy your application, all by running a single Helm command.\n\n\n- Download the WhiteSource [Helm\nchart](https://gitlab.com/fjdiaz/whitesource-helm)\n\n- Edit\n[values.yaml](https://gitlab.com/fjdiaz/whitesource-helm/-/blob/master/helm/whitesource/values.yaml)\n\n- In values.yaml, set `whitesource.ingress` to\n**https://whitesource.INGRESS_URL.com**\n\n\nYou can get the INGRESS_URL from your Kubernetes cluster settings.\n\n\n![ingress url\nlocation](https://about.gitlab.com/images/whitesource-integration/base_domain.png)\n\nIngress URL location\n\n{: .note.text-center}\n\n\n- Make sure Ingress is installed.\n\n\n![ingress\ninstallation](https://about.gitlab.com/images/whitesource-integration/ingress_installation.png)\n\nInstalling Ingress\n\n{: .note.text-center}\n\n\n- Install [Helm](https://helm.sh/docs/intro/install/)\n\n- Deploy WhiteSource with Helm template:\n\n\n```bash\n\nhelm upgrade -f helm/whitesource/values.yaml --install whitesource-gitlab\n./helm/whitesource\n\n```\n\n\n## Using WhiteSource\n\n\nOnce the WhiteSource plugin has been installed, we can add the `@whitesource`\nuser to the repositories\n\nwe wish to scan. A merge request (MR) with the `.whitesource` file will be\ngenerated automatically.\n\n\nWhiteSource will now scan your repository and generate issues for all the\nvulnerabilities discovered on the main (master)\n\nbranch. These issues will provide detailed information on the vulnerability\nas well as how to resolve it. 
Some issues\n\ncan even be auto-remediated.\n\n\n![whitesource issue\nview](https://about.gitlab.com/images/whitesource-integration/whitesource_issues.png)\n\nWhiteSource vulnerability issues\n\n{: .note.text-center}\n\n\nEach time a new MR is pushed, a WhiteSource scan will run and provide a\ndetailed output.\n\n\n![whitesource merge request\nview](https://about.gitlab.com/images/whitesource-integration/whitesource_merge_requests.png)\n\nWhiteSource MR scanning\n\n{: .note.text-center}\n\n\nEach link provided by WhiteSource shows detailed information on the\nvulnerabilities the scan detected:\n\n\n![whitesource web\nlinks](https://about.gitlab.com/images/whitesource-integration/whitesource_advanced_issues.png)\n\nWhiteSource vulnerability information\n\n{: .note.text-center}\n\n\nWhiteSource can be integrated into the [GitLab Security\nDashboard](https://docs.gitlab.com/ee/user/application_security/security_dashboard/)\nso that your security team can manage the\n\nstatus of these vulnerabilities. 
Access to the Security Dashboard requires a\n[GitLab Ultimate account](/pricing/ultimate/).\n\n\nTo integrate WhiteSource with the Security Dashboard, add the following to\nyour `.gitlab-ci.yml`:\n\n\n```yaml\n\nwhitesource-security-publisher:\n  image: openjdk:8-jdk\n  when: manual\n  script:\n    - curl \"{{WEBHOOK_URL}}/securityReport?repoId=$CI_PROJECT_ID&repoName=$CI_PROJECT_NAME&ownerName=$CI_PROJECT_NAMESPACE&branchName=$CI_COMMIT_REF_NAME&defaultBranchName=$CI_DEFAULT_BRANCH&commitId=$CI_COMMIT_SHA\" -o gl-dependency-scanning-report-ws.json\n  artifacts:\n    paths:\n      - gl-dependency-scanning-report-ws.json\n    reports:\n      dependency_scanning:\n        - gl-dependency-scanning-report-ws.json\n    expire_in: 30 days\n```\n\n\nFor more details on the integration, check out [WhiteSource for\nGitLab](https://whitesource.atlassian.net/wiki/spaces/WD/pages/806191420/WhiteSource+for+GitLab).\n\nLearn more at [DevSecOps](/solutions/security-compliance/) and check out the [Secure\ndirection page](/direction/secure/) for more\n\ninformation on the upcoming features and integrations.\n\n\n## Learn more about application security at GitLab\n\n\n- [How application security engineers can use GitLab to secure their\nprojects](/blog/secure-stage-for-appsec/)\n\n- [Get better container security with GitLab: 4 real-world\nexamples](/blog/container-security-in-gitlab/)\n\n- [How to capitalize on GitLab Security tools with external\nCI](https://docs.gitlab.com/ee/integration/jenkins.html)\n\n\nCover image by [Alexandra\nAvelar](https://unsplash.com/@alexandramunozavelar) on\n[Unsplash](https://unsplash.com/s/photos/snow-capped-mountains)\n\n{: .note}\n",[108,855,232,901,9],{"slug":2502,"featured":6,"template":688},"whitesource-for-dependency-scanning","content:en-us:blog:whitesource-for-dependency-scanning.yml","Whitesource For Dependency 
Scanning","en-us/blog/whitesource-for-dependency-scanning.yml","en-us/blog/whitesource-for-dependency-scanning",{"_path":2508,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":2509,"content":2515,"config":2520,"_id":2522,"_type":13,"title":2523,"_source":15,"_file":2524,"_stem":2525,"_extension":18},"/en-us/blog/why-did-we-choose-to-integrate-fluxcd-with-gitlab",{"title":2510,"description":2511,"ogTitle":2510,"ogDescription":2511,"noIndex":6,"ogImage":2512,"ogUrl":2513,"ogSiteName":675,"ogType":676,"canonicalUrls":2513,"schema":2514},"GitOps with GitLab: What you need to know about the Flux CD integration","Inside the decision to integrate Flux CD with the GitLab agent for Kubernetes and what it means to you.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749678356/Blog/Hero%20Images/balance-speed-security-devops.jpg","https://about.gitlab.com/blog/why-did-we-choose-to-integrate-fluxcd-with-gitlab","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"GitOps with GitLab: What you need to know about the Flux CD integration\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Viktor Nagy\"}],\n        \"datePublished\": \"2023-02-08\",\n      }",{"title":2510,"description":2511,"authors":2516,"heroImage":2512,"date":2517,"body":2518,"category":683,"tags":2519},[765],"2023-02-08","\n\nIn January, [we decided to integrate Flux CD with the GitLab agent for Kubernetes](https://gitlab.com/gitlab-org/gitlab/-/issues/357947). [Flux CD](https://fluxcd.io/) is a mature GitOps solution and one of the market leaders in the area. We have since decided to make Flux CD our recommended approach to do GitOps with GitLab – previously, the agent for Kubernetes alone was the recommended approach. 
Let's discuss what this change means for current users and what our plans are for the integration.\n\nFirst of all, let's remove the most worrying thought from the agenda: We are not deprecating any agent for Kubernetes functionality at this point. The GitOps offering remains fully supported and transitions to maintenance mode. We plan to deprecate it with at least one year of removal time once we consider the Flux integration solid. As a result, the removal is unlikely before the GitLab 17.0 release, which is expected in 2024. We are looking into providing tooling to facilitate (or automate) the migration once the time comes. If you use the agent for Kubernetes for GitOps, you don't have to do anything at this time.\n\nThis change does not affect the agent's other non-GitOps functionality either. The [CI/CD pipeline integration](https://docs.gitlab.com/ee/user/clusters/agent/ci_cd_workflow.html) and [operational container scanning](https://docs.gitlab.com/ee/user/clusters/agent/vulnerabilities.html) remain intact, and we will continue investing in them.\n\n## What to expect from this change\n\nFrom now on, instead of building our solution for GitOps, we will focus on supporting Flux and improving its user experience when it is used together with GitLab. Flux CD will become the recommended tool to do GitOps with GitLab. Initially, we will provide documentation on the Flux setup we recommend for our users while we focus on building out various integrations.\n\nIn terms of the integrations, we are looking at providing a UI built into GitLab. You might also be able to use the UI with other tools, including the CI pipeline integration of the agent, but it will work best with deployments managed by Flux. Besides the UI integration, we want to streamline Flux's access management. Flux accesses GitLab through the regular GitLab front door. 
As a result, it needs to authenticate with a token, requests might be rate-limited, and, in general, it does not seem to be the most efficient way to do its job. We plan to simplify this for our users to avoid the necessity of managing dozens of deploy keys and to decrease the load on GitLab at the same time.\n\n## Why Flux?\n\nWhy did we choose Flux CD instead of something else? We evaluated several options. There are other open-source GitOps tools. The biggest contender was [ArgoCD](https://argoproj.github.io/cd), another mature Cloud Native Computing Foundation project in the GitOps space. ArgoCD is a full-featured product for GitOps, while Flux is a GitOps toolkit. While we like and value ArgoCD a lot, we think it does not lend itself to integration with GitLab.\n\nAs we are already in the process of building out UI integrations with the cluster, we know how the GitLab UI will be able to reach the Kubernetes API. Flux relies on the standard Kubernetes API 100%, so we can easily integrate it into our UI access approach. Relying only on the Kubernetes API is a significant benefit over ArgoCD, which provides a custom API.\n\nBesides going with another tool, we evaluated the work needed to build a competitive, in-house solution. We found in-house development is the strongest contender to Flux CD, and while it was very compelling, we decided to go with the integration instead. We believe this should give our customers more value faster than a custom solution. Moreover, it should enable existing Flux users to benefit from our integrations with minor modifications in their usage patterns as we roll out the integrations.\n\n## What comes next?\n\nFirst, we want to [document our recommendations for using FluxCD with GitLab](https://gitlab.com/gitlab-org/gitlab/-/issues/389382). At the same time, we will change our GitOps documentation to recommend Flux instead of the legacy GitOps solution. 
We consider these the most important steps to minimize uncertainty and set you up for a successful start.\n\nTogether with the above, the team is working hard on shipping the first version of an [integrated Kubernetes UI](https://gitlab.com/gitlab-org/gitlab/-/issues/375449). We are starting with an environment overview and building an [entire Kubernetes dashboard](https://gitlab.com/groups/gitlab-org/-/epics/2493) as part of GitLab. The cluster UI integration will enable GitLab users to learn more about their cluster state without leaving the GitLab UI and should allow a nearly real-time view of GitOps deployments using Flux CD.\n\nWe have clear ideas on how to do what I described above. We are still researching and learning about many other topics, including [how best to simplify Flux's access to GitLab](https://gitlab.com/gitlab-org/gitlab/-/issues/389393). If you have experience using Flux with GitLab and have any feedback, recommendations, or requests on what the integration should support, we would like to hear from you. Please reach out to me using [my GitLab profile](https://gitlab.com/nagyv-gitlab).\n\n## The Flux community\n\nBefore I close this article, I would like to say hi and thank you to the Flux community. We have already been invited to the Flux development meeting, and the core team was very welcoming. As we have always actively contributed to the core tools supporting our GitOps offering – first [`gitops-engine`](https://github.com/argoproj/gitops-engine/), later [`cli-utils`](https://github.com/kubernetes-sigs/cli-utils/) – we are looking forward to contributing to Flux CD.\n\nWe are looking forward to working more closely with you. Thank you for building this great tool and community!\n\n_Disclaimer: This blog contains information related to upcoming products, features, and functionality. It is important to note that the information in this blog post is for informational purposes only. 
Please do not rely on this information for purchasing or planning purposes. As with all projects, the items mentioned in this blog and linked pages are subject to change or delay. The development, release, and timing of any products, features, or functionality remain at the sole discretion of GitLab._\n\nRead more:\n\n- More about the [Flux CD integration decision](https://gitlab.com/gitlab-org/gitlab/-/issues/357947) \n- Docs for [agent for Kubernetes](https://docs.gitlab.com/ee/user/clusters/agent/ci_cd_workflow.html) \n- Issue on [our current focus](https://gitlab.com/gitlab-org/gitlab/-/issues/389382) \n- Preparation issues: [Flux to GitLab access management](https://gitlab.com/gitlab-org/gitlab/-/issues/389393) and [Visualizing Kubernetes resources within the Environments page](https://gitlab.com/gitlab-org/gitlab/-/issues/375449)\n\n",[539,9,685,1004],{"slug":2521,"featured":6,"template":688},"why-did-we-choose-to-integrate-fluxcd-with-gitlab","content:en-us:blog:why-did-we-choose-to-integrate-fluxcd-with-gitlab.yml","Why Did We Choose To Integrate Fluxcd With Gitlab","en-us/blog/why-did-we-choose-to-integrate-fluxcd-with-gitlab.yml","en-us/blog/why-did-we-choose-to-integrate-fluxcd-with-gitlab",{"_path":2527,"_dir":245,"_draft":6,"_partial":6,"_locale":7,"seo":2528,"content":2534,"config":2539,"_id":2541,"_type":13,"title":2542,"_source":15,"_file":2543,"_stem":2544,"_extension":18},"/en-us/blog/aws-reinvent-recap",{"title":2529,"description":2530,"ogTitle":2529,"ogDescription":2530,"noIndex":6,"ogImage":2531,"ogUrl":2532,"ogSiteName":675,"ogType":676,"canonicalUrls":2532,"schema":2533},"Highlights from AWS re:Invent 2018","Catch up on what GitLab got up to at AWS re:Invent last week! 
Reinventing pipelines, emerging as a single application, theCUBE interviews, and more.","https://res.cloudinary.com/about-gitlab-com/image/upload/v1749679994/Blog/Hero%20Images/aws_booth_2018.jpg","https://about.gitlab.com/blog/aws-reinvent-recap","\n                        {\n        \"@context\": \"https://schema.org\",\n        \"@type\": \"Article\",\n        \"headline\": \"Highlights from AWS re:Invent 2018\",\n        \"author\": [{\"@type\":\"Person\",\"name\":\"Priyanka Sharma\"}],\n        \"datePublished\": \"2018-12-06\",\n      }",{"title":2529,"description":2530,"authors":2535,"heroImage":2531,"date":2536,"body":2537,"category":300,"tags":2538},[1082],"2018-12-06","\n\nLast week GitLab was at AWS re:Invent 2018, the marquee event for cloud computing in the US. As the frontrunner in the space, Amazon has built re:Invent to be a juggernaut. This year it commanded most of the Las Vegas strip and had over 50,000 attendees. As a first-time visitor myself, I was impressed by the sheer scale and efficiency of the event. I was also thrilled to achieve my personal goal of giving my first talk with a live demo using code and GitLab. As for GitLab, we saw that our company emerged as a leader in the DevOps space with a single application for the whole software development lifecycle.\n\n## Highlights\n\n### Reinventing CI/CD pipelines\n\nOur CEO [Sid Sijbrandij](/company/team/#sytses) and I did a talk and live demo about reinventing CI/CD pipelines using GitLab, [Kubernetes](/solutions/kubernetes/), and EKS. This was our first hint that this re:Invent was going to be special. The talk was bursting at the seams with attendees, as we shared both the challenges of the toolchain crisis engulfing our ecosystem and how a single application for the entire DevOps lifecycle can improve cycle times by over 200 percent. 
You can [check out the presentation here](https://docs.google.com/presentation/d/1x1g4pfpoaav9lhcYkjAJylLMl-9S0JFTeKXlNF98O-I/edit?usp=sharing).\n\n![Sid Sijbrandij and Priyanka Sharma on stage at AWS re:Invent](https://about.gitlab.com/images/blogimages/aws-2018/aws_2018_sid_talk_stage.jpeg){: .shadow.medium.center}\n\nThe demo, which showed us running a CI/CD pipeline and deploying code to Kubernetes on EKS, is an example of the [cloud native workflows](/topics/cloud-native/) users can push via GitLab. This competency makes Kubernetes on EKS a breeze and is the reason GitLab was awarded the [AWS Partner DevOps Competency Certification](/blog/gitlab-achieves-aws-devops-competency-certification/) to confirm our viability and excellence as a DevOps solution for companies using AWS Cloud.\n\n### Validation for our vision\n\nOur experience at re:Invent was one of validation and emergence. As a company, we saw that our efforts to build the first single application for the entire DevOps lifecycle have paid off and that our users resonated with our message. Most folks who came to our booth were aware that GitLab played a part in multiple stages (if not all) of their workflow and many were avid [GitLab CI](/solutions/continuous-integration/) fans. Gone are the days when [version control](https://docs.gitlab.com/ee/topics/gitlab_flow.html) was the only thing GitLab was associated with.\n\n![Collage from GitLab at AWS re:Invent](https://about.gitlab.com/images/blogimages/aws-2018/aws_booth_collage.jpeg){: .medium.center}\n\nOur VP of Alliances, [Brandon Jung](/company/team/#brandoncjung), [appeared on theCUBE](https://www.youtube.com/watch?v=Ejs5xGAhL8s) with a company called Beacon. As the former head of partnerships at Google Cloud, Brandon has a long history with GitLab. He has seen the company grow over the years and shared how our rocketship ascent across the DevOps lifecycle convinced him of the potential. 
He said, \"In just over two years, [GitLab became the frontrunner for continuous integration](/blog/gitlab-leader-continuous-integration-forrester-wave/), according to Forrester. That's impressive.\"\n\n### Livestream with The New Stack\n\nI also represented GitLab on [a livestream podcast](https://www.pscp.tv/w/1eaJbODAepnxX) with [The New Stack](https://thenewstack.io/), [Matt Biilmann](https://twitter.com/biilmann?lang=en), CEO of [Netlify](/blog/netlify-launches-gitlab-support/), and [Joe Beda](https://twitter.com/jbeda), founder of [Heptio](https://heptio.com/) and creator of Kubernetes. We discussed GitOps, NoOps, and the toolchain crisis. As Matt wisely said, \"Trust in open source is critical to cloud computing and the ecosystem. Companies like GitLab will keep the players honest.\"\n\n{::options parse_block_html=\"false\" /}\n\n\u003Cdiv class=\"center\">\n\n  \u003Cblockquote class=\"twitter-tweet\" data-lang=\"en\">\u003Cp lang=\"en\" dir=\"ltr\">GitOps, NoOps and the tool chain crisis. \u003Ca href=\"https://t.co/mtfm8OaYYD\">https://t.co/mtfm8OaYYD\u003C/a>\u003C/p>&mdash; The New Stack (@thenewstack) \u003Ca href=\"https://twitter.com/thenewstack/status/1067881587214184448?ref_src=twsrc%5Etfw\">November 28, 2018\u003C/a>\u003C/blockquote>\n  \u003Cscript async src=\"https://platform.twitter.com/widgets.js\" charset=\"utf-8\">\u003C/script>\n\n\u003C/div>\n\nWe thank AWS for creating this amazing ecosystem of end users and practitioners who came together in Vegas last week. Next year will be bigger, better. Until then, see you all at [KubeCon](/events/)! 😃\n",[814,268,923,278,1004,9,1288,835],{"slug":2540,"featured":6,"template":688},"aws-reinvent-recap","content:en-us:blog:aws-reinvent-recap.yml","Aws Reinvent Recap","en-us/blog/aws-reinvent-recap.yml","en-us/blog/aws-reinvent-recap",11,[668,693,714,734,755,777,798,822,842],1756472655681]