Public “Office Hours” (2019-12-11)

Erik Osterman | Office Hours

Here's the recording from our “Office Hours” session on 2019-12-11.

We hold public “Office Hours” every Wednesday at 11:30am PST to answer questions on all things DevOps/Terraform/Kubernetes/CI/CD related.

These “lunch & learn” style sessions are totally free and really just an opportunity to talk shop, ask questions and get answers.

Register here: cloudposse.com/office-hours

Basically, these sessions are an opportunity to get a free weekly consultation with Cloud Posse where you can literally “ask me anything” (AMA). Since we're all engineers, this also helps us better understand the challenges our users have so we can better focus on solving the real problems you have and address the problems/gaps in our tools.

Machine Generated Transcript

Let's get this show started.

Welcome to Office Hours. Today is December 11th, 2019, and my mom's birthday.

Happy birthday mom.

My name is Erik Osterman and I'll be leading the conversation.

I'm the CEO and founder of Cloud Posse.

We are a DevOps accelerator.

We help startups own their infrastructure in record time by building it for you and then showing you the ropes.

For those of you new to the call the format of this call is very informal.

My goal is to get your questions answered.

Feel free to unmute yourself at any time you want to jump in and participate.

We host these calls every week, and we automatically post a video of this recording to the office hours channel on SweetOps as well as follow up with an email.

So you can share it with your team.

If you want to share something in private just ask me and we can temporarily suspend the recording.

With that said, let's get the show started.

I do want to cover some things that we were just talking about before we kicked off the call here, which is on efficient ways to track your time as a contractor.

If you're doing that type of stuff.

But also here's some other talking points.

If we run out of things to cover, here are some other talking points. First, a public service announcement that somebody brought up again and reminded me of this morning.

The official Helm chart repositories are slated for deprecation in about a year, and there's a schedule announced for that, which is on this link.

I want to show you our new Slack archive search functionality, which makes it a whole lot easier to find what you're looking for in the SweetOps Slack we use.

I'll go live for that. I also want to show our new Terraform modules for the new managed node groups for EKS, a result of the announcements from re:Invent. And then one of the things Andrew also asked about was what considerations we take into account when evaluating a CI/CD platform, and I expanded on that a bit.

But let's just quickly jump to this thing about tracking time.

What was that cube called that you were talking about, Andrew?

It's called Timeular, so you get the app.

The app is free, but you can set up... I think it's got eight sides, so you can set up eight different tracking targets, whatever they are, whether they're different clients or projects.

No, I didn't buy the physical device yet, but I'm using the app, which just lets you quickly tap on a target.

So I've already set up three targets: one is the one project that I'm on, the other is a second project I run, and the third one is overhead, which is time that I spend that I can't charge to either project. And I can hit start/stop on any of them as I'm working through the day. Like this morning: I started work at, I don't know, 8:30, and I spent an hour just plowing through emails, and then I stopped myself and thought, how am I going to charge that last hour I spent? Because it was an email from this place and then an email from that place, and I didn't know.

And so I was like, all right I've heard about these devices and stuff.

So let's look into them.

And so this app is free and I'm using it now. I'm thinking about buying the actual device, because then you don't have to look down at your phone; you just pick it up and flip it to another face, and it automatically syncs with your phone over Bluetooth.

The device itself is like $69 or something, but you know, hey, it's Christmas. I wonder if there's some way to integrate that with either Zapier or Harvest directly, because we've been using Harvest to track time.

Currently, they offer the following integrations: Toggl, Jira, Harvest, iCal, Google Calendar, and Outlook Calendar.

Cool.

And then who was just talking about WakaTime? WakaTime has come up before. That was me, yeah.

So I use it to basically track my hours on my projects.

It works great, but I don't have to do any serious billing.

It's just like, OK, most of my time today went to this project, and that's good enough.

So WakaTime is kind of like self-inflicted spyware for consultants.

It keeps track like what files you're editing what things you're doing so you can back that out to reconstruct your time as well as track your time at the same time.

It looks pretty good for developers.

Yeah, it has lots of integrations too.

It's a cool program to just roughly track how much time you spend.

Anyone else tried WakaTime? No? OK.

Cool, well, let's shift the focus back to DevOps. First I want to see: anybody have any questions related to Terraform, Kubernetes, Helm, et cetera?

Well then, let's jump right into your question, Andrew. You had asked me just before we kicked this off what some of the considerations were that we took into account when evaluating a CI/CD platform, and I jotted down some notes before the call.

So I wouldn't forget.

But before I talk about this, I just want to say that when we started out, it was like four years ago, and yes, there was a rich ecosystem of CI/CD tools.

Even then. And I think that ecosystem has 10x'd since we started.

It feels like everyone's pet project is reinventing a CI/CD platform.

And there are more than I could possibly hope to evaluate today.

So I think there are a lot more exciting options available today than when we started looking at things.

And here are some considerations.

I think are basically requirements today.

So, everyone can see my screen here. One big thing, I think, is approval steps, since most companies still don't feel 100% confident with continuous deployment to production; for example, having a manual gate there.

Ideally with some level of role based access controls who can approve it would be great.

Now, since we live and die in Slack, having that integrated with Slack feels like a natural requirement as well.

I just want to get a DM when my approval is required, kind of like Pull Reminders does for pull requests.

Next: shared secrets. This is a biggie for me, and one of my biggest complaints about GitHub Actions is that there's no concept of shared secrets right now.

So when you have, like, 300 projects like Cloud Posse does, and you have integration secrets that are reused across all of them, how do you manage that?

If you have to update each one?

We end up having to write scripts to manage the scripts that manage the secrets.

So you're basically scripting updates to your shared secrets across your GitHub repositories. In that case,

I would rather have an organizational thing where maybe teams under GitHub could have shared secrets.

I think would be a pretty cool way of doing it.
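As a sketch of the kind of script this forces you to write: assuming the GitHub CLI (`gh`) is installed and authenticated, and with hypothetical repo names, syncing one shared secret across many repositories looks something like this.

```python
import subprocess

# Hypothetical list of repositories that all need the same integration secret.
REPOS = ["example-org/repo-a", "example-org/repo-b", "example-org/repo-c"]

def build_secret_commands(repos, secret_name):
    """One `gh secret set` invocation per repository."""
    return [["gh", "secret", "set", secret_name, "--repo", repo] for repo in repos]

def sync_secret(repos, secret_name, value, dry_run=True):
    """Push the same secret value to every repo; with dry_run, just return the commands."""
    commands = build_secret_commands(repos, secret_name)
    if not dry_run:
        for cmd in commands:
            # `gh secret set` reads the secret value from stdin when --body is not given
            subprocess.run(cmd, input=value.encode(), check=True)
    return commands
```

An organization-level shared secret, as suggested above, would make this script unnecessary.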

Easy parallelization.

I think you get this pretty much out of the box these days. When we started, CircleCI, for example, didn't have that.

Now they do with Circle 2.0. Other things: easy integration with Kubernetes.

I don't believe, for example, that CircleCI has this. If you're a big-time user of Kubernetes,

I don't want to spend half my pipeline just setting up the connection to the Kubernetes cluster.

I'd like that to just be straightforward.

So this was one of the pros of why we selected Codefresh. Codefresh makes that really easy, because it has a bunch of turnkey integrations to the most common systems you'll use.

What else? Container-backed steps. Again,

when we started, this was not the standard.

Now this is the standard.

Every CI/CD pipeline basically uses steps backed by containers.

So I think that's fair.

Supporting webhook events from pull requests originating from untrusted forks.

This is a biggie.

If you do a lot of open source, how do you handle this securely?

And the reality is, you can't get by without them. Travis CI, for example, will only run the pipeline if you don't have secrets in it.

But these days you can't do anything without secrets, so that's not a fair restriction.

So there needs to be a secure way to do that.

Typically, it's done these days using chat ops where authorized individuals can comment on the pull request and then that triggers some events.

And in the act of doing that, it will temporarily give that pipeline access to secrets that could technically be exfiltrated, but it's a necessary risk to take.

I just set one of those up on one of my open source projects, and it was a frickin' breeze.

Did you do that with Codefresh?

Right Yeah.

So Yeah it should be a breeze.

And technically it is a breeze in Codefresh.

I will say that I've had a lot of issues.

But I think that's because our account has been grandfathered in through so many upgrades that something's wrong with our pipelines.

So we constantly have to reset our webhooks and stuff.

But yeah, it is technically very easy in Codefresh to do that.

And then you just look for a slash command or something in your trigger.
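As a rough sketch of that trigger logic (the command names and the authorized-user list here are hypothetical): the pipeline only fires when a known maintainer leaves a slash command as the first line of a PR comment.

```python
# Hypothetical set of users allowed to trigger pipelines via comments.
AUTHORIZED_USERS = {"alice", "bob"}

def parse_slash_command(comment_body: str, author: str):
    """Return (command, args) if the comment is a slash command from an
    authorized user, otherwise None."""
    if author not in AUTHORIZED_USERS:
        return None
    body = comment_body.strip()
    if not body.startswith("/"):
        return None
    parts = body.splitlines()[0][1:].split()
    if not parts:
        return None
    return parts[0], parts[1:]
```

A webhook handler would call this on every comment event and, on a match, grant that one pipeline run access to secrets.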

Yeah, that's a natural segue into ChatOps for pull requests.

So like I said, if you trigger everything you want to check on a pull request automatically, you'll quickly fill up your pipelines, and your integration tests are going to take forever.

So I think being able to do conditional-type stuff on labels and comments is a requirement for my system.

It should make it easy to discover all open PRs so it's easy to trigger this. Jenkins does this, not quite out of the box, but with...

I forget what the plugin is for Jenkins that does that.

You're using that though, right?

The GitHub plugin.

Yeah Yeah Yeah.

Codefresh had this, but they just deprecated that view.

So you can no longer see all of the pull requests and re-trigger the pipelines for them.

I think this is a requirement, because pull requests have additional metadata, and you can't easily simulate that just by triggering the CI build on a branch, because you lack all that other metadata about the pull request.

So a way to trigger builds or jobs from pull requests manually, in addition to automatically, is a requirement. It should also support remote debugging.

This is one thing that, with Codefresh for example, hasn't been a possibility until recently.

Now they've added support for that.

I hear really good things about it.

Basically, you want to be able to remote-exec into any step of your pipeline for triaging, because God knows it's so tedious to debug things if the only way is to rerun the entire pipeline from beginning to end and not be able to jump in or look at what's going on inside of that container. CircleCI has had that for a very long time.

There have been tricks we've had to use, like tmate, to get around that in the past.

So making sure that that's possible.

Question for those of you using Jenkins: does Jenkins support that with any plugins today?

Like, being able to reach into a container running a step in a pipeline?

No? OK.

I'm sorry.

Oh no you go ahead sorry.

What I've usually done in the past.

So right now I've got Docker builds, and I'm using the multibranch pipeline from Jenkins.

What I usually do is, if I'm doing multi-container builds, I'll try and keep it in a format that I can easily recreate locally.

So you just try to recreate the same setup locally rather than in Jenkins.

And I haven't had many issues there.

There are a couple of nuances with the Jenkins plugins, but as long as you don't update Jenkins you're OK, which isn't always ideal, but it's kind of the sad reality, I guess, in open source.

You jogged my memory for another thing: it should support local execution of pipelines for debugging.

This is pretty cool when you want to iterate quickly and be able to trash things: just being able to run that pipeline locally. Codefresh has had support for this for some time now.

I don't know about circle or Travis.

GitHub Actions you can run locally, or at least people have come up with workarounds for that.

Jenkins: does that support local pipeline executions?

I would run a local instance of Jenkins to do it.

You laugh, but I've done that and it works fine.

I know. It's just... I get it, and it works.

Given our demographic of who we are and what we do.

But I wouldn't expect my developers to rattle off, like, a docker-compose up of Jenkins and all the integrations for that; and then, I guess, if you're not running local Kubernetes, that's still not enough.

That's a good conversation to have.

I would love to debate that with anybody else who is and learn from it.

OK Yeah.

I've actually seen a GitHub Action to run a local Kubernetes cluster inside of a GitHub Action.

And I mean that in a flattering way, not a ridiculous one; that's something I spend a lot of time around.

Yeah, I'm derailing the conversation, but we're not doing it yet. We plan on using kind, which is Kubernetes in Docker, in order to test.

Right now we are developing reusable helmfiles, similar to how Cloud Posse has a bunch of reusable helmfiles.

Or like your monochart: a reusable chart, or reusable helmfiles.

And we need to test them.

So our first iteration is just going to be, you know, a smoke test: helmfile apply, then wait, then helmfile destroy. If those work successfully, then we're going to call the test...

Yeah, successful.

So right now we're just going to deploy into our Kubernetes cluster, but we don't want to dirty the cluster with that stuff.

So we're going to use kind, which should be really cool for that kind of smoke test. This is where things can spiral out of control with standard charts, though: what if you had a chart that wanted to test external-dns? Then you need the IAM role for that.

You know, there are limitations.
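A minimal sketch of that smoke test, assuming `kind` and `helmfile` are on the PATH and with a hypothetical helmfile.yaml, would be something like:

```python
import subprocess

# The smoke test described above: create a throwaway cluster, apply, destroy.
STEPS = [
    ["kind", "create", "cluster", "--name", "smoke-test"],
    ["helmfile", "--file", "helmfile.yaml", "apply"],     # deploy everything
    ["helmfile", "--file", "helmfile.yaml", "destroy"],   # tear the releases down
    ["kind", "delete", "cluster", "--name", "smoke-test"],
]

def run_smoke_test(dry_run=True):
    """Run each step, failing fast on a non-zero exit; dry_run only prints."""
    for step in STEPS:
        if dry_run:
            print("would run:", " ".join(step))
        else:
            subprocess.run(step, check=True)
    return len(STEPS)
```

If any step exits non-zero, the test fails; otherwise it's called successful, exactly as described.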

The charts we're working on right now are just services; we're working on a GitLab one and a Jenkins one. The most common thing that we end up needing to use Terraform for, in combination with Helm, is for the backing services of chart dependencies, of which IAM roles are the most common.

Have any of you seen anything that auto-provisions IAM roles somehow as part of Kubernetes, like a CRD for IAM roles? You can use the Terraform Kubernetes operator, I think, and Atlantis to do that.

But what about it?

The Terraform operator could be exactly that; the one by Rancher.

I guess there are a few now.

Yeah, I'm not doing that.

I don't think it would work, because of the debugging implications of CRDs. I think it's beautiful in principle, but what I want is a better way to see it visually.

Basically, what I think I would want is kind of like a CircleCI/Travis/Codefresh-style UI for looking at the CRDs executing.

So there's a window into that system specifically for CI.

This relates to Flux as well.

I haven't used Flux firsthand, but my understanding is it also suffers from some of this lack of visibility when debugging things. Everything is great when it works, but when you're developing, things never work.

So how do you debug that?

To quickly finish up the remaining things on this bucket list of considerations for CI/CD platforms: making it easy to tag multiple versions of a Docker image.

It sounds obvious or trivial, but I've seen systems which literally require you to rebuild the image for every tag you want to have.

So being able to add tags to an existing image, I think, is valuable as well.

For example, we always like to tag every image with the commit SHA.

In addition to, perhaps, the release tag.

All right.

Moving on: it should support pipelines as code.

Basically, the ability to declare your pipelines alongside your code itself, and then, ideally, the ability for whatever system you're using to auto-discover those pipelines.

So GitHub Actions is a prime example of that, where you don't need to click around anywhere to get those actions working.

Let's see: it should support a library of pipelines or pipeline steps.

This has also become very popular these days.

Codefresh has their steps library; GitHub Actions, I think,

does this really well.

What platform is it that has orbs? CircleCI, you know, has orbs.

So that concept I think, is really important.

So, related to this: there are a few of you on the call using Jenkins.

I know this is possible in Jenkins as well, but my sense is that I haven't heard anybody really talk it up; a lot of people dislike having this central library of, like, declarative pipelines within Jenkins, and say that it leads to management challenges, software lifecycle challenges, and instability.

Is that your feeling as well?

If so, can you elaborate on that?

Why does Jenkins fail where CircleCI and Codefresh succeed? So I'll answer that more generally.

Tools that let you do whatever the hell you want.

Fail because people do whatever the hell they want.

I like tools that provide an opinionated way to do something because then it's like this is the correct way to do it.

So do it this way people do.

Jenkins shared libraries in lots of different ways.

Some of them work better than others.

So there is a way: if an organization lays down the law on how to do it and sticks to that, it can work well.

So I actually just posted a link to some of our Job DSL code.

A lot of our Jenkins setup is open source. Please, look at it.

We tried to create guidelines for, like, how to create pipelines. A lot of it is

Jenkins Job DSL, lots of Groovy.

We're all software developers.

You get a lot of "let's over-abstract our Job DSLs," so we import classes in Groovy, which is super fun.

One of the big issues that we found, before my time here, is that it's just, like, a whole lot of code.

And people deviate from the standards.

And as the standards change over time, you end up with series A through Z evolutions of it.

Yeah it's hard to keep everything all together.

And that prompted some of our assessments.

When I looked at Codefresh, I kind of ruled it out a little bit.

But we have, like, an Argo and Argo CD and a Jenkins X CD halfway spun up, and I'm curious, because I push my team on being prescriptive.

That's a good thing.

Jenkins X is so prescriptive that you can do it only one way, versus Argo. Underneath, with Argo,

you can do whatever the heck you want; it's an arbitrary code execution job platform.

But Argo CD is a little bit prescriptive and only does the CD portion of it.

So, you guys: how do you pick the right amount of prescriptiveness?

The example I'll always give is Angular versus React.

Angular has a "this is the way it should work" philosophy.

You know, it's much more prescriptive. React is more "do it however you want to do it": you want to do it in just vanilla JavaScript? Great.

Do what you want to do it in typescript great.

Do it.

That's the example I usually give.

Not to say that either Angular or React is better than the other.

I'm not saying that, but that's the example I usually give.

We went back and forth assessing Argo CD and Jenkins X.

And they just announced... not that it's deprecated, exactly; they just announced that the community would be taking over maintenance of the project.

So, whatever.

Yeah, and we have, of course, the library in Python that Alex is maintaining. That's a pile of Python scripts to manage pipelines; inside of that you can implement a YAML parser.

So we've had pipelines as code for a long time.

There are much better ways to do what we did.

But at the time, that was a good option.

But I'm looking at Argo, where you do everything in a manifest, basically.

I'm liking that format, but I'm seeing limitations, because Argo CD has a much narrower focus.

So for us to use it, we're going to have to plug and play, like, six different components, whereas with Jenkins X we would have everything all in one.

We have nothing written in Angular and everything written in React, so I guess we've made the decision that we prefer flexibility, to stretch that analogy. But yeah, I'm trying to figure out the right pieces, because there are several considerations here that I don't have answers for with Argo CD.

And so it's a very timely discussion, because we just did this assessment last week.

So I would love to follow up on this.

If you join us in a subsequent office hours, share what you find, because Argo has been in our evaluation bucket of technologies as well.

One of the things I find, like the approval steps, is that they're not an easy thing to implement in a lot of these tools.

OK Yeah.

Integration with Slack.

Pretty much everything does that these days, but to varying degrees.

I would like to say it must integrate with Slack and give you control over the content of the messages.

Basically, there are a lot of systems that will integrate with Slack, but they just spew generic messages that are kind of irrelevant for half the people.

So being able to target these messages to groups or teams, and then also customize what shows up in the messages, I think, is something important.

And then: support GitHub deployment notifications.

Now, we are a GitHub shop.

We use GitHub extensively, and GitHub has a special dialog now where you can see where a pull request has been deployed.

Tying into that somehow, I think, is a nice thing.

And then, lastly, on pricing: if you're going for a SaaS solution, I really think that the pricing should be fair to smaller companies, not just enterprises; it should support a standard price-per-user model and unlimited builds.

Basically, the product shouldn't limit how much you want to use it, and it should provide a reasonable escape hatch for going over those limits without forcing you into enterprise lock-in. So that's my list.

Any other considerations.

I overlooked that since we have a bunch of people here and a lot of those seating experienced that's a show.

Well Yeah it's a lot to ask about medication year well there once that when coach rush doesn't do so hot out because they get behind the wheel only with enterprise enterprise tax enterprise.

Yeah there is a perfect plug for the B SSL wall of shame website again.

What is it what was that.

It's sso.tax, I think, or something. Anyway: does someone know of a CI/CD tool that allows you to include text in the approval step? Like, for example, a diff of the Helm chart, or the Terraform plan, or something.

Like for example a default see how I'm charged or the terror from plan or something.

I really would love that. Could you repeat the question one more time?

I would love to have a tool that shows the approval with, say, the Terraform plan; the approval step would include that text.

Oh Yeah Yeah I get it.

Yeah that would be cool.

So, like, it Slacks the Terraform plan to a channel, and then there's an approve-or-reject kind of button below it. That would be really good.

Yeah, for example...

Bitbucket Pipelines does this really well, because the steps basically collapse below each other, and when you approve, you see the step from before it.

So basically you have some context on one side.

On the other side, CircleCI does it really badly, because if you want to approve, you have to do it from a different view than the actual output, which I really, really dislike. Because, let's be honest, there are a lot of people that don't really look at their CircleCI output; they see a single step and just approve because they have to. So we have an implementation that we did by hand.

This was some time ago, for a Codefresh demo.

What's cool about this implementation that we did by hand is that it's just using custom steps, so you could implement it in any pipeline that lets you run containers.

And it does more or less what I think you're saying.

This will look familiar if you're using Terraform Cloud or Atlantis.

It looks familiar, but I think it's a little bit more polished.

And basically what we do is we're using the Cloud Posse GitHub commenter, the cloudposse/github-commenter tool, and then we have a number of templates in our testing Docker repos, if you want to borrow them, and then we output this plan.

So I thought this was really clean here, right?

This, I feel, is a perfect example of what you're asking for.

I would like this now not only as a GitHub comment, but also as a Slack message with the buttons at the bottom.

Sorry, my screen got hijacked for a second. And then we stripped out all that superfluous information using scenery.

Now, this was done, as you can see, back in April, before Terraform 0.12.

I don't know if scenery has updated support for Terraform 0.12 yet, but yeah.

So then: clean output. And then just clicking on this takes you to Codefresh, and I can approve it or reject it right from there. OK.
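Setting scenery's exact behavior aside, the gist of stripping the superfluous output is simple. A rough sketch (not scenery itself) that keeps only the change lines and the summary from a `terraform plan`:

```python
import re

ANSI_ESCAPES = re.compile(r"\x1b\[[0-9;]*m")  # color codes in the raw plan output

def clean_plan(plan_output: str) -> str:
    """Keep only resource-change lines (+ / - / ~) and the Plan: summary."""
    kept = []
    for line in ANSI_ESCAPES.sub("", plan_output).splitlines():
        stripped = line.strip()
        # the trailing space avoids matching "-----" separator lines
        if stripped.startswith(("+ ", "- ", "~ ", "Plan:")):
            kept.append(stripped)
    return "\n".join(kept)
```

The cleaned text would then be wrapped in a template and posted back to the pull request by something like github-commenter.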

Yeah, that's clean.

I was thinking, like, how do I approve? Because people will just hit approve in whatever tool you are using and don't see the context.

So you have to kind of force them to actually click on these buttons.

Yeah, and I would really welcome that; like, actually seeing the context of what you are approving is super important.

It is, because we never check it out locally and try it.

And we need to.

And then, obviously, a little bit of reward at the end if your plan is successful or your apply is successful.

Right it's beautiful.

Thanks Yeah.

Any other considerations?

I'll share this list afterwards so you guys can use it to form your requirements.

Do you have other platform requirements, like OS support? So this has not been a requirement for us, but it's a totally reasonable expectation for others.

It should support multiple OS platforms.

Well, yeah, my request is Windows and Android, for example.

We also have... I don't know if this requirement is a good thing or a bad thing, but for CD: who can trigger a rollback from the GUI, for example.

So we have a whole bunch of RBAC access rules for all of our pipelines.

I find it less helpful than helpful.

But some people require RBAC.

Yeah, e.g. for approval steps. I mean, that's somewhat related to approval steps, because approval steps mean nothing if you don't have RBAC or attribute-based access controls.

Yeah, I was just trying to figure out approval steps for Argo CD, because in our case they don't exist.

I have not figured it out, but there is a diff in Argo where you can see a diff of the produced manifest files. I haven't actually used it for valuable things yet.

But it's the only tool that I've seen natively show, like, every Kubernetes object in a UI where you click on it and see the difference between different places.

Well you should talk to our friend here.

Here he is.

He has something you might like.

And then there's also helm-diff, right?

So are you guys using Helm charts for your releases?

Yeah, well, while we have Helm... it's broken, so I looked at helmfile.

We have some helm-diff things locally.

However, Argo is what's templating and expanding out the files.

So we're not actually... the Argo POC is the only place, and it's not in production right now.

You can't run helm-diff in production because there's no Tiller.

But I'm still sorting all that out.

Wait, why did you mention helmfile in that?

Oh, because helmfile is basically an automation tool around Helm, and helm-diff... helmfile requires it to function.

And what I like about it is, if you're using helmfile, you basically have a Terraform-like workflow for dealing with Helm charts.

So there is a plan phase and an apply phase with helmfile.

So I'm currently using helmfile to document what version of what chart is installed, for non-application stuff.

So for ChartMuseum, for cert-manager, those things.

But it's not working, and I haven't bothered fixing it for some of that stuff.

So it's not very production-ready there, but we are thinking about killing it and moving everything over to Argo, or whatever we do for CD, so that you don't have to manage things in two places.

Obviously we would lose some of the functionality that you're mentioning, like the diff capability, but I'm trying to understand what the pros and cons of each would be.

Let me also show you one thing that I really love about using helmfile as part of our workflow.

So we always bring this up: the four layers of infrastructure. You have foundational infrastructure at the very bottom; the next one up is your platform; the next one is your shared services; and the next one is your applications and their backend services. And usually you treat each one of these independently.

So obviously helmfile works really well for your shared services, for deploying all the things your platform requires.

Things like external-dns, cert-manager, et cetera, et cetera.

But it's also really awesome for layer 4, because in our case, here's our example app, where we showcase a lot of the release-engineering-type workflows that we use.

If you go into the deploy folder, here is an example.

And this shows... so this example app has these dependencies, and helmfile also supports remote dependencies.

So what's cool about that, going back to my earlier requirement about supporting a shared library:

you can create a library of services that your applications use, and use version pins to point to them.

What I don't like about my example is that I'm using local references here, but just imagine that these could be remote. And then, inside of the releases folder, the developer can define how his application is deployed.

So it gives the developer a declarative way to deploy their applications, and it becomes even more declarative if you use something like monochart, which is our chart here. Monochart is basically a Helm chart as an interface that you can use to deploy

99 percent of the apps that you're going to be deploying on Kubernetes, without needing to write a custom Helm chart.

That's enabled because, when you use helmfile, values become your DSL for your declarative interface.

In this case, for your applications.
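As an illustrative helmfile release (the fields under `values` are a guess at monochart's interface, not its exact schema), the whole deployment reduces to values passed to a pinned chart version:

```yaml
repositories:
  - name: cloudposse
    url: https://charts.cloudposse.com/incubator

releases:
  - name: example-app          # hypothetical application
    namespace: default
    chart: cloudposse/monochart
    version: "0.12.0"          # pin the interface version
    values:
      - image:
          repository: example/app
          tag: '{{ env "IMAGE_TAG" }}'   # e.g. the commit SHA from CI
```

With this in place, `helmfile diff` gives you the plan phase and `helmfile apply` the apply phase, mirroring the Terraform workflow.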

Since we're systems integrators, we want to be opinionated, but we need to be general enough to support all our customers' use cases, which is why we support quite a lot of variations of configuration in our monochart.

But if you are building this for your own company, you don't need all these variations, and you should standardize how you deploy your apps to Kubernetes,

and then reduce the permutations that are possible there.

Going back to Andrew's comment about if you give everyone the option to do anything, they will do everything.

So you want to eliminate that and this is how you can do that.

So you just have a policy: hey, use monochart.

Any version you want, but that's cool, because now you can just look at the interface of the monochart version that you're using here.

In this case, version 0.12, et cetera.

So, go ahead.

What we want and what we have currently deployed are super different.

But if I focus on what we want and how we think we want it.

This is super useful because I haven't looked at your example I picked off one of our versions.

We Stole a bunch and simple stuff really fun.

Moved away from that tangible stuff into Docker and Docker images, and set it up so that, you know, Travis builds the Docker image and pushes it into Docker repos here.

Those are all git-triggered.

And then at that point there's a new tag on the image, which would then need to go into this chart and get updated.

At first we put the chart in the application repos, and we're trying to work out the whole format of chart inheritance; it's kind of changing around with some of the newer stuff.

So haven't figured that out perfectly.

But do you have an example like this that has, like, GitOps set up, for when an application change happens and then everything else needs to follow suit?

So, yes and no.

So here's one nice thing about what we're doing here.

How do I set this up without taking too much time.

So one of the problems I have with a lot of these pipelines that are out there is basically they wake up when they see a new image and then they deploy that image somewhere but that is like one third of the problem.

The other 2/3 is like well what's the architecture that this image gets into.

And then what's the configuration and secrets that I need to run this.

And usually those three things are all tied together.

So deploying an image when the image changes without taking into account those other two things is worthless.

Yet that's how it is demonstrated in so many examples out there.

So maybe it works for some people.

I just.

If someone can show me I can't get my head around it.

So our strategy is quite interesting, and I call it the cartridge approach.

So I want containers to work like Nintendo video game cartridges and just for the record I mean I stopped playing video games when it was the original Nintendo with Super Mario Brothers.

So don't ask me more questions about this, because I quickly get out of my depth.

But as really as two days I do.

So I want Docker containers that I plug into my cluster.

And it just works.

And that means that it basically needs to also have its architecture deployed alongside of it.

So what we've been doing, and it's worked out well for the last few engagements that we've done, is this: you see, this deploy directory is in my application, my microservice.

So we're shipping all of this together at the same time.

So when we publish this container, what we can do is actually call helmfile on the helmfile inside the container, and that deploys the container along with its architecture and image at the same time.

I don't know, I feel like I derailed a little bit there rather than answering your question specifically.

Adam: So I asked the leading question, so that's the appropriate answer.

Yeah, I'm still wrapping my head around the right amount, because you're right: the demos that I've looked at only solve a portion of my problem.

Usually the image part of the problem. And figuring out how I should thread everything together when there's a million choices: like, I can pick a way, but I don't know if it's the right way.

So this conversation's helping for me to just think about edge cases that I haven't already thought of.

Yeah so Yeah.

So then this basically answers that.

And so our pipeline for that.

What I don't like about our example app is that I did something here that was me experimenting with a way of doing this, but we're not using this particular tool for it right now.

So I had this deploy tool as part of the image that implements, say, a blue-green strategy for deploying this container, or implements a rolling strategy for deploying this container.

My point was more that, as you see here, we're calling helmfile as a step.

And then if you look at the pipeline.

So we ship the pipeline here, the example pipelines, and if you go to the deploy step here, you can see that we are just deploying with helmfile using this deploy tool; but you don't need it.

Well, my point is you could actually just call helmfile directly.

And that's what we do.

All right.

Let me digress.

Let me pause on that.

Any other questions related to anything we've talked about or something you.

I was going to mention that the way most Helm charts get implemented right now is sad, and that is, you know, that if you take all the defaults, what gets stood up is, like, an insecure dev setup.

Oh yes.

Yeah I want to.

I want to start a revolution to flip that you know.

Yeah you forget to set some setting.

You're going to fail up into the more secure space.

You know, so that if you don't set any parameters and you take all the defaults, what you're deploying is production ready.

I agree with what you're saying in principle.

It's just, operationalizing something is very different from getting a POC up, and the difference between success and failure is often early quick wins.

So psychologically speaking, practically speaking, I get why these charts try to be as turnkey as possible. Like, look, one button:

I get this full-on Prometheus monitoring cluster with Grafana, fully integrated with everything. Wow, that was easy.

But yeah, operationalizing it is now going to take a lot longer, because there are so many things and considerations.

You gotta do.

The thing is, if we optimized for the latter, which is what we really need to go to production, most people would fail, because there's so much that has to happen to get it right, and it's non-trivial.

And that's also where you get more opinionated. So does that mean we, out of the box, require SSO, when 99% of the stuff doesn't do SSO? And then if you're going to do SSO, how are you going to do SSO? Well, we use Keycloak, you use Okta, somebody else uses whatever; the permutations explode such that you can't support them.

And that's why it's so aspirational.

I agree practically.

I just don't see it working.

So the way.

The way we're coming at it for my team right now is that we are coming up with helmfiles that act as, well, we're taking the approach that you guys have come up with (and maybe you haven't come up with it, but you're using it) for your Terraform root modules.

We have helmfile root modules where they're literally:

This is the way I want you to deploy production GitLab.

It does require SSL, you know. And for MVP we're going to support one SAML provider, because that's what we've been doing at first.

We're not saying it requires key cloak.

We're just saying it requires SAML.

So whether that SAML is coming from Okta or coming from Auth0 or whatever, we don't care.

Yeah, and then what we're doing, and this is what I love about helmfile, and I'm learning more and more about helmfile, I haven't used it for that long yet, is we're using the environments functionality to say the default environment is production, and if you say --environment=dev then it doesn't worry about all that stuff.

Exactly Yeah.

No environments are great for that.

We see people like you.

Let's say we have helmfile.

I know, we're talking layer cakes here.

Layers and layers and layers.

OK that out of the way.

OK, what's cool about helmfile and environments is this: environments now let you, as a company, define the interface to deploy your Helm releases, which is in a lot of ways perhaps the final layer.

If you're using Helm.

This is basically: how can we as a company consistently deploy our apps using Helm? Because Helm charts have ridiculous variation in their values schemas. Basically no two Helm charts have the same interface, unless they're developed by your own company.

So helmfile, by way of environments, lets you define a consistent interface to deploy all of your apps, one that developers can understand.

Yes there are a few downsides to this.

When you want to figure out how to implement a new toggle or whatever, you've got to go five layers deep to figure out where to stick it; that's our predicament right now.

Source leverage.

The goal is: if one of my people runs helmfile apply on GitLab, it won't work because it's got required variables, but then they add in all the required variables it tells them to add, you know, because it'll tell them exactly what they're missing.

Once they've added all the required variables and it works, it is now the exact prescriptive way that I want them to deploy GitLab. Which is, like, oh, that's the holy grail right there; that is beautiful.

I do like that.

I do like that especially the emphasis you put there on the required variables and that it tells you what you're missing.

For that one thing.
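A sketch of how that environments-based interface might look is below. The file names, the default-to-production convention, and the values keys are assumptions for illustration, not details from the recording:

```yaml
# helmfile.yaml, a hypothetical sketch. The default environment carries the
# hardened production settings; "dev" relaxes them.
environments:
  default:
    values:
      - environments/production.yaml   # e.g. SSL on, SAML/SSO required
  dev:
    values:
      - environments/dev.yaml          # relaxed settings for development

releases:
  - name: gitlab
    namespace: gitlab
    chart: gitlab/gitlab
    values:
      - values.yaml.gotmpl             # templated against the environment
```

Running `helmfile --environment dev apply` switches to the relaxed settings. Inside the templated values, helmfile's `requiredEnv` function fails fast with a message naming the missing variable, which gives the "tells them exactly what they're missing" behavior described above.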

Remember our talking points.

We didn't get through most of them so I'll leave for next week.

One thing I just wanted to point out as it relates to Helm:

The official Helm chart repository is being deprecated.

There's some interesting discussion brought up this week, you pointed out a bunch of it, which is that this is going to reduce the quality of a lot of charts out there, because of the lack of automated testing that most people won't implement.

People have different levels of rigor as it relates to maintaining their charts producing the artifacts uploading them not clobbering previous releases.

Basically the ecosystem metadata, the way the signed packages are up there.

There is so many things that go wrong as a result of the official chart repository going away.

Now we all know why that official chart repository sucked.

It was impossible to get a change through because of the levels of bureaucracy and the checks and everything else like that.

So I don't know.

There's a balancing act here.

What I want to point out is that I think a big part of the success of, for example, Terraform modules is that they don't require a dedicated repository to manage them.

The fact that you can basically point at any git repository and pull that stuff down and have immediate success I think is awesome.

I think it's a problem that Helm has taken this staunch approach that we've got to have a managed chart repository, which is basically just a glorified HTTP server.

But yeah, why do we need that? There's always public source control, and it's been proven to work with NPM.

Why don't we just do something like that.

So the truth is you can, using Helm plugins. It's non-standard, but this is what we've been using now for over a year: the helm-git plugin here, which I recommend.

Where is it installed.

So there is a dependency.

This is to install a chart just from a git repo.

Yeah exactly.

We have a repo, it's this one here.

I'll share it.

I really dig this plugin, and it's good. So, this plugin combined with helmfile:

You basically have your package.json for Kubernetes. You can also just specify the URL to the chart; the downside is that doesn't work with versioning.

Which also works.
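As a concrete sketch, the helm-git plugin lets a helmfile repository entry point straight at a chart inside a git repo, versioned by git tag. The org, path, and tag below are hypothetical:

```yaml
# Requires the helm-git plugin:
#   helm plugin install https://github.com/aslafy-z/helm-git
repositories:
  - name: example-charts
    # git+<repo-url>@<path-in-repo>?ref=<git-ref>, versioned via git tags
    url: git+https://github.com/example-org/charts@stable?ref=v1.2.3

releases:
  - name: my-service
    chart: example-charts/my-service
```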

Well, I suppose, yeah, I get the tarball, but somebody is going to have to produce that artifact.

Yeah, and it works if you have, like, a one-to-one correlation between GitHub repositories and charts; then you use the tarball URL for the chart. And maybe that's actually the answer here: we should move away from mono repos to poly repos for Helm charts, and then we can just leverage that functionality.

But this is.

Yeah, this is putting up a tarball as a release artifact inside your releases.

No, no, you don't need it, because again, every GitHub repo has an automatic tarball artifact. But would that work with the way that...

Because when you say helm package, it packages up a tarball in a specific format.

So you would have to maintain that format.

That is to say, in the repository. It doesn't do anything else; it just zips it up.

So right.

So long as you maintain that format, basically the root of your GitHub repo needs to be where the chart is.

Yeah the same format as what it's expecting.

Yeah Yeah.

That could be cool.

I mean, it's worked really well for Terraform and Terraform modules.

So I think it's probably a good approach for Helm charts as well, even though we didn't do that with the Cloud Posse charts, because we were just emulating what Google was doing originally. But now that's going away.

So I guess it's a brave new world.

Well, and Google loves mono repos too.

Well, you know, people try to mimic their mono repo approach and fail at it.

You know, there are certain circumstances where it makes sense, but, you know, it makes everything harder.

So let's see here.

So here's an example of a chart.

All you need to do is create a repo called cert-something, like, I think, maybe. So what is it called? terraform-provider-name. I think what we should have is probably helm-chart-cert-manager or something like that.

I think the community should come out with a canonical repository naming structure and support that.

So, if you're familiar with the Terraform registry; and I want to refer back to the XKCD I posted a few minutes back.

Well, there's Maven, right. The Maven artifact naming convention; there's prior art in that as well.

A lot of these will have that.

I'm just going to show how it works.

I was just going to share this for the Terraform module registry: they're very particular about how you name your GitHub repository.

Basically terraform-provider-name.

I think we need that for Helm charts, so we get some sanity in the ecosystem.

Is there a similar thing for Helm Hub, and how do you get something published to Helm Hub?

Good question.

There is. Helm Hub is set up so you can publish charts. So there's a process to get added in there, which works, kind of, but the quality of these repos currently is

Super super super bad.

I mean, I get daily notifications of people publishing the same version with different digests. Like, this is a mess.

And for me, it's like: how can you use that chart and not be notified about it? And it's still the official Helm Hub, which I find super, super risky.

But nobody can have it.

Helm Hub does not track source code.

It tracks the package repo, where you push up your tarballs.

Right Yeah.

I just realized Adam I forgot to finish that thought that we were talking about earlier you wanted to see the difference between two versions.

This is what you could integrate with your solution; we've just got to compare.

Why am I not getting a diff.

I'm not sure.

Try the other one.

Try not artificially maybe a release will spark some of CJJ.

Images or process to impact mentally.

So they're kind of a reach.

So you just click on compare, and there we go.

So you compare any two releases of a Helm chart and see what's changing.

Yeah, this would be super useful, because, like, with the mono repos,

there are no, you know, versions of individual Helm charts that you can really look at.

So there's a couple of discussions.

I wrote to the chart maintainers. So there's no enforcement that the chart has to be open source, and they don't have to disclose the source repo, which I find super, super stupid, because otherwise you can't just, like, look at it.

I wrote a recommendation that we implement, like, file hashes in the Helm Hub, like I did with this one. Yeah.

But let's see; it would be interesting if you could somehow partner with Helm Hub.

It seems like a perfect synthesis of what I'm writing with them, but the interest was kind of low.

Really I don't know.

Yeah Yeah.

I wrote a lot of messages to them, but the interest seems to be, well, kind of there but not really.

Let's see, because in my opinion there should be at least a central service: not every service, but the digests should be shown when you upload them, and the hash should be checked against.

So at least some kind of checking. It's great as a package store to upload them, but if the hash changes, you should get an alert.

I agree.

Something like that. Oh yeah, yeah, there are lots of things to do, but yeah, let's see.

I put it there.

What I think, kind of, should be done; but let's see how it goes.

Well all right.

Any last thoughts before we wrap things up today.

I'm starting to get interested in billing stuff, if anybody, like, has an initiative going on around that, like AWS billing.

Yeah all right.

Yeah, we should probably dedicate more time to cost optimization techniques in AWS sometime soon.

We pay for CloudHealth, and they have a Kubernetes thing that splits it up, but we're not billing to different teams, because it's more of a "let's make sure we don't have a bunch of snapshots that need to get deleted and a bunch of instances that are not right-sized."

But I'd also be curious how to apply billing to Kubernetes.

It's also all of this stuff to do in get us to.

Yeah Yeah.

OK go.

I'll make a note of that and cover that on our upcoming office hours next week.

All right.

Looks like we've reached the end of the hour and that wraps things up for today.

Thanks again for sharing everything.

I learned a bunch of little cool tricks today.

I'm going to go check out the time tracker thing I always learn so much.

Thank you.

Again the recording will be posted immediately afterwards in the office hours channels.

This is available.

See you guys next week same time, same place all right.


Public “Office Hours” (2019-12-04)

Erik Osterman | Office Hours

Here's the recording from our “Office Hours” session on 2019-12-04.


Machine Generated Transcript

Let's get started.

Welcome to Office Hours. Today is December 4th, 2019. My name is Eric Osterman, my screen's going nuts, and I'll be leading the conversation.

I'm the CEO and founder of Cloud Posse. We're a DevOps accelerator: we help startups own their infrastructure in record time by building it for you.

And then showing you the ropes.

For those of you new to the call the format of this call is very informal.

My goal is to get your questions answered.

Feel free to unmute yourself at any time if you want to jump in and participate.

We host these calls every week and will automatically post a video of this recording in the office hours channel, as well as a follow-up email.

So you can share it with your team.

If you want to share something in private, just ask.

And we can temporarily suspend the recording.

With that said, let's kick this off.

Here's the agenda for today.

First thing is Igor has joined.

He's part of the cloud posse team.

He's been with me for a very long time.

And he's going to share a little bit about the ThoughtWorks Technology Radar.

This is really cool if you haven't seen it before.

It's a great way to stay on top of what's happening in our industry.

And then, if we have some time, we'll go over some relevant re:Invent announcements.

Somebody's microphone is open there.

Vincent I guess.

Vincent Yeah.

Good to meet you Vincent.

There we go.

Sorry about that.

All right.

And then let's get started.

So first thing I just want to see if anybody has any pressing questions that you need answered.

Any problems.

This can be related to Cloud Posse stuff (Cloud Posse Terraform modules, Terraform in general, Kubernetes) or just general architecture questions related to DevOps and cloud.

Yes they'll hear it.

And I trust you to have a good question.

CIDR blocks.

How do you guys kind of do IPAM on AWS? How do we manage our subnet allocations, would be another way to put it.

OK Yeah.

Mute my notifications here.

I'm getting Slack-bombed.

All right.

So yeah.

Subnet calculations.

We spent a fair bit of time dealing with this.

It would've been, like, a year or two ago.

But it came back up again earlier this year when we wanted to create a subnet architecture that spanned multiple AWS accounts and was to some degree future-proof.

So that we could continue adding accounts.

It's a challenge, because if you want to support peering between accounts and VPCs, you need to think of that ahead of time.

So the best example, I have for that would be how we're actually doing it right now.

Woops let's get out of fullscreen mode here.

More efficient.

So under the cloud policy reference architectures.

We implement one strategy.

You've implemented the reference architecture.

So you undoubtedly interacted with it.

But you might not have just known it.

Let's see if I remember exactly where that CIDR is. But Terraform provides a bunch of subnet calculation functions and interpolations.

And that's what we use to do this.

I'm guessing, because you're advanced at Terraform,

I'm guessing your question might be deeper than this.

What we basically did is we took a large block, I think a /8.

And then we divided it evenly across the number of potential accounts that we would have.

And then we gave each one of those accounts that CIDR block, and within that account we further subdivided for the VPCs there. One of the things we did originally, that we backtracked or backpedaled on, related to subnets and VPCs, was we used one VPC for backing services and one VPC for our Kubernetes clusters.

And then we would peer those.

And that would allow us to share that VPC with the backing services across multiple VPCs for different Kubernetes clusters.

But we decided against that, because it just obliterated our CIDRs, our available IPs, because every time you subdivide, you divide by two.

So we don't do that anymore.

Now we just run one large shared VPC with both backing services and Kubernetes clusters and related security groups.
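The arithmetic behind that subdivision can be sketched with Python's `ipaddress` module. The /8 base block and the /16-per-account and /20-per-subnet masks below are illustrative choices, not the actual allocation; Terraform's `cidrsubnet("10.0.0.0/8", 8, n)` performs the same /8-to-/16 split.

```python
import ipaddress

# One large organization-wide block, divided evenly across accounts,
# then subdivided again inside each account for its VPC subnets.
org_block = ipaddress.ip_network("10.0.0.0/8")

# One /16 per (potential) AWS account, allocated by index.
account_blocks = list(org_block.subnets(new_prefix=16))

# Inside one account, carve its /16 into /20 VPC subnets.
dev_account = account_blocks[1]            # e.g. 10.1.0.0/16
dev_subnets = list(dev_account.subnets(new_prefix=20))

print(account_blocks[0])   # 10.0.0.0/16
print(dev_account)         # 10.1.0.0/16
print(dev_subnets[0])      # 10.1.0.0/20
```

Planning the masks up front like this is what makes later peering possible, since none of the per-account ranges overlap.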

Any more specific questions.

So I can tailor the answer better for you.

No one knows it yet.

So the way we split for accounts is not based on architecture here.

So like for example without the request a block from operations center, or splits from that based on one side.

Yeah So different products may have different needs.

Wait a minute.

You need them art.

Yes, because we store our data elsewhere.

So just really kind of planning your own it.

So how are you guys managing subnet allocations then? Because basically you're saying it spans beyond what you can do in Amazon: it goes to other clouds, it goes to your data center.

It goes to other places.

Yeah Yeah.

Yeah in this area.

I bet maybe somebody else might have some insights as well.

How do they mean.

How do you manage subnet allocations at scale for a larger organization that spans multiple clouds?

That goes beyond, well, Terraform is great from a practitioner perspective.

But how do you do it from like from a management perspective.

From a pragmatic perspective, have you seen any software to manage these?

I guess a lot of companies just use ticketing systems.

I think so, or something like an Excel sheet.

Anything Yeah Yeah, I'd love to know if their software that actually does that.

No that doesn't go well.

Yeah, I can't get back to you on that.

Oh, yeah, they've been cool.

Yeah Yeah, I mean, I actually don't really know what's in it, but I mean, we run like nine data centers on our own.

And I know that they're sharing some nights between these data centers.

So yeah, my wager is that they still use Excel spreadsheets.

But they're intuitive.

Maybe but I can ask.

All right.

Go, go.

All right.

You were not forgotten.

I just want to get through questions first, and then we'll fill it in with technology radar.

Sure take it.

Let's see.

Any other questions, Terraform or AWS related?

I have a question.

Yeah, sure.

All right.

I work in a place where we use a GovCloud account, and I'm trying to set up a Kubernetes cluster.

But I've been having issues getting to the API because it's a private hosted zone.

Now I don't know what else to do.

Do I have to install a DNS server internally so I can get my API URLs to work? Or do you have an idea of what to do with that in the GovCloud account?

Man I wish I could help you.

We've done nothing with GovCloud.

However, there's a handful of people in SweetOps that do work with GovCloud.

One of them is a regular here.

But he's not on the call right now.

So I'm guessing he's at re:Invent or something.

So, you know, I've used GovCloud. OK, fire away.

So my assumption here is that you're trying to get Route 53 to work within GovCloud.

Yes Yeah.

So within the partitioned region of GovCloud there is no Route 53 service, because it wouldn't make sense to have a public partition that is actually private; everything in GovCloud needs to be private.

So, I mean, there are two notions of zones in Route 53: public and private.

And this would be like a third, you know, state.

So they just don't offer it.

So you need to provision it from your root account, or a different sub-account from your public AWS account hierarchy, the organization. Your GovCloud account is parented by a public AWS account, right?

Yes Yes.

So it's within that tree, on the public side, that you need to allocate an account.

And then create Route 53 resources that match up with your AWS GovCloud resources.

Interesting. One other question, just because I've seen it mentioned by others in the community: are you working specifically with kops by any chance, or are you using EKS?

Yes, kops, because I heard that the alternative really won't go there.

Exactly So yeah.

So do we eat with what do you call it with us.

There is a mode where it uses gossip to discover the nodes. Are you using gossip mode?

Yes And it's still not working for you.

OK So it creates the cluster.

But I can't get to the master node with the DNS provisioned internally; I still can't get to the master.

What about with just the raw IP address of the master node, that you would see through the console or otherwise?

So that it will resolve: I could get to the cluster with the IP address, but generally we would need a fully qualified domain name within the cluster.

So yeah, I'm not going to be the best to answer that one; whatever is being used there is trying to call out to Route 53.

It's actually not really.

I really wonder if I misspoke.

I meant kops or whatever.

Yes, you know, with the kops command you can attach a DNS zone, which is probably internal, because it's gossip-based.

I'm just going to use k8s.local.

But even with that.

If I try to log in from a bastion, I still can't get to it.

Let me see if I can find somebody on SweetOps that would know; that's the best chance.

Yeah, let's see here; we have a kops channel.

I think we have some I haven't tried to do what you're asking to do.

But I did notice that the distinction, Route 53 not being an in-region resource, definitely made using Terraform harder. Exactly.

Yeah, I tried it with my commercial AWS account.

It was easy.

It was like a breeze.

Yeah Yeah.

In general, a lot of tools don't support GovCloud, as you've probably figured out, right?

It's a region that isn't normally like part of people's suite of continuous integration tests.

So it never gets tested and never gets supported. I'll loop you in with the community in the kops channel.

Is the bastion pointing to the right DNS? Yes, and you can look it up manually.

The DNS.

Yeah, so the record is actually within the private hosted zone.

So any other machine within the VPC in that hosted zone can resolve it.

So maybe the answer is you can only use a VPN with your kops cluster.

If I move off of it, if I'm not going to use kops, do you have any ideas?

If you don't want to use kops: I believe Gravitational has, I mean, there are other commercial distributions of Kubernetes.

Gravitational has one that is air gapped.

And I believe k3s also has an air-gapped install. Yeah.

Gravity, from Gravitational; they just raised a bunch of money, another 25 million.

That's the business to be in.

Sorry I can't help more.

Well, they raised 25 million.

And I'm not sure, because I didn't have experience with the GovCloud zone.

But probably Teleport is what you need to access the cluster from outside.

Well, I think that's an optimization; he can't be connected, period, right now, internally using the DNS.

I believe when I used gossip mode with kops it was public cloud, and it wasn't this.

And I was able just to use the IP address. All right.

So if I hear anything, I'll let you know.

I also reached out in the general channel to see if anybody can help me.

I'll be on the lookout, and do post back.

If you figure it out in the end, post back so the knowledge helps everyone.

OK Thank you.

Any other questions.

All right.

Well, you were when you.

Well, let me do a quick intro.

So, as we were sharing here, we're going to introduce the ThoughtWorks Technology Radar.

If you haven't seen that already.

Check that out.

That's my tab.

It's taking forever to load here.

Yeah, so if it stops sharing, I can continue.

I can continue.

Yeah, right.

So the ThoughtWorks Technology Radar has been going on for a while. Well, Igor is going to give an introduction to it; he is on the Cloud Posse team. And let's treat this as a way to open up the conversation about cool things out there.

Yeah So.

Hello, guys.

The link that Eric provided on his slides is outdated, because a few weeks ago a new Technology Radar was released.

It's volume 21.

So I guess you've heard about the Technology Radar.

But anyway, I will do a short introduction so everyone is on the same page.

So, about the Technology Radar: ThoughtWorks is a company that does software development consulting.

So they made a report with their opinion on where technology is going: what is interesting, what they have good experience with, what they are looking at, and what they suggest to stop using.

And a cool thing, why I like it and follow it, is that they have been doing this report for 10 years. So we can look into the past and see what ideas and insights became mainstream and what mistakes they made. The report includes a wide spread of technologies.

And today, I will run through on the things that related to drops.

So escape our and light it from 10 machine learning and et cetera.

Igor, you're cutting out a little; we're losing you.

No? OK, you're right.

You were cutting out. Is it better now?

Sorry, for a moment there I lost the last 20 seconds.

OK, OK.

So something which I like about this report is that when I was reading it, I found that there is a sense of what we also do at Cloud Posse.

And the whole report is, at least, key notes to discuss and research.

So I hope that you will share your thoughts about the different points.

OK, so the report consists of four quadrants: techniques, tools, platforms, and languages and frameworks. Today we will quickly run through techniques. Feel free to look into the other quadrants to see if there are tools or articles you have experience with, and if you have any thoughts, you are welcome to share.

It will be very interesting.

So if you will go to technique.

So the radar consists of four rings.

Adopt is the list of things they have good experience with and suggest adopting in most projects.

Trial is a list of techniques, tools, platforms, et cetera that they have good experience with, but where they still don't know all the disadvantages, and that don't fit most projects yet.

Assess is things they are looking at that look promising, and Hold is a section where they provide the list of things you should stop doing,

because they don't look like best practices. So, going into the Techniques section, and what we use and do at Cloud Posse, in a different way.

So the first one is container security scanning. It's now a part of Docker Hub

and Amazon's container registry.

And Eric just mentioned before, the call about Aurora.

Right OK.

So Clair has been open sourced; that came from Quay, didn't it?

This is now ubiquitous in container registries.

So yeah.

So that's it: adopt it and start using it in everyday projects.

This is it.

So this is a technique to scan Docker images, which can find known security issues; if an image is safe, it gets a mark and you can see that.
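For a concrete example of the kind of scanning being discussed: ECR's built-in image scanning can be driven from the CLI. This is just a sketch; the repository name and tag are hypothetical placeholders.

```shell
# Trigger a vulnerability scan on an image already pushed to ECR
# ("my-app" and the tag are hypothetical placeholders).
aws ecr start-image-scan \
  --repository-name my-app \
  --image-id imageTag=latest

# Fetch the findings once the scan completes; the summary groups
# CVEs by severity (CRITICAL, HIGH, MEDIUM, ...).
aws ecr describe-image-scan-findings \
  --repository-name my-app \
  --image-id imageTag=latest \
  --query 'imageScanFindings.findingSeverityCounts'
```

With Docker Hub and Quay the equivalent happens automatically on push, and you read the results in the registry UI.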

Another thing is pipelines for infrastructure as code.

So we don't apply changes to infrastructure by hand. I see tools like Travis, Codefresh, Jenkins, et cetera.

We have Atlantis, which is a tool that runs tasks on pull requests opened on GitHub.

The tasks that perform the actual changes are described as code and stored in the repository.

So from that point of view, it follows this pattern.

We have been doing this for at least a year,

I guess.

And the mindset is: this is a good practice.

And it is very useful.

And Atlantis is a good tool for this purpose, probably better than most alternatives today, because it gives you the ability to check what changes it will apply.
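A minimal sketch of that Atlantis setup. The project path is a hypothetical placeholder; the PR comments shown are the standard Atlantis commands.

```shell
# A minimal repo-level atlantis.yaml (the project path is hypothetical):
cat > atlantis.yaml <<'EOF'
version: 3
projects:
  - dir: terraform/vpc
    autoplan:
      when_modified: ["*.tf", "*.tfvars"]
EOF

# On a pull request, Atlantis auto-runs `terraform plan` and posts the
# output as a PR comment. You then drive it by commenting on the PR:
#   atlantis plan -d terraform/vpc   # re-plan a specific project
#   atlantis apply                   # apply the pending plan
```

The point from the radar is exactly this: the plan output is reviewed alongside the code change, before anything is applied.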

Cool. And another thing which is interesting, and which we are trying to adopt, is run cost

as an architecture fitness function.

So the idea is that you should monitor the cost of the whole system and its different subsystems, down to the level of a pod: how much does it cost you?

What I'd add to this is that it didn't stand out to me until right now, as I'm reading it.

So we've been using Kubecost, which is an open source, open core kind of Kubernetes cost tool, and behind the scenes it uses Prometheus.

It also works with Grafana.

What's interesting here is what this is pointing out is that you can observe the cost of running services against the value you deliver.

So this gets kind of interesting: if, in Prometheus, you have visibility into bottom-line numbers for your business, orders sold or, you know, sign-ups, things like that.

This gets interesting because you can now back that out into what it cost to operate it.

And I think that's what the fitness function here is referring to.

Yeah, so this metric consists of two parts: a cost metric and a value metric.

How to collect the value metric is business specific; to Prometheus it's all the same, it's just a database, a sink.

So basically, there needs to be an ETL, or a real-time, you know, basically a Prometheus exporter, that ingests that data from whatever source you have.
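As a sketch of what that looks like once the exporter exists: Kubecost exposes cost metrics to Prometheus (for example node_total_hourly_cost), which you can query over the standard HTTP API. The Prometheus address and the orders_total business metric here are hypothetical; the business metric is exactly what your own exporter would have to publish.

```shell
# Query Prometheus's HTTP API for a cost metric. node_total_hourly_cost
# is exported by Kubecost's cost-model; the hostname is a hypothetical
# in-cluster address.
curl -sG 'http://prometheus.monitoring.svc:9090/api/v1/query' \
  --data-urlencode 'query=sum(node_total_hourly_cost)'

# Pair it with a (hypothetical) business metric your own exporter
# publishes, and you get the cost-per-unit-of-value fitness function:
curl -sG 'http://prometheus.monitoring.svc:9090/api/v1/query' \
  --data-urlencode 'query=sum(node_total_hourly_cost) / sum(rate(orders_total[1h]))'
```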

And then you can truly achieve what they're describing here.

The question for me is how to calculate the value of each service, at each point, for example.

So Kubecost can calculate, it can show you, how much each service you are running on Kubernetes costs, for example, or each pod. Another tool that provides this type of information is Spotinst, which is a service, not self-hosted, that shows you cost, and it does a great job of it. But with Spotinst you can't factor in your own metrics as part of that; it will just show you how much your namespace costs in COGS.

So in this case, let's say, for example, that for a period of time you sold a million dollars.

And for that same period of time your infrastructure cost you $100,000 to operate.

Now you know that you have a 10x return for every dollar that you're spending on infrastructure. Is that really an interesting metric, compared to what companies spend on marketing, for example?
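The back-of-the-envelope math from that example, as a quick sketch:

```shell
# Numbers from the example above (illustrative, not real figures):
revenue=1000000     # dollars sold in the period
infra_cost=100000   # infrastructure spend for the same period

multiple=$((revenue / infra_cost))
per_dollar=$(awk -v r="$revenue" -v c="$infra_cost" 'BEGIN { printf "%.2f", r / c }')

echo "Return multiple on infra spend: ${multiple}x"   # 10x
echo "Revenue per infra dollar: \$${per_dollar}"      # $10.00
```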

I mean, I mean.

OK I don't know.

I mean, for us, I would not care about those architecture cost metrics.

I know I pay this much a month, and I know how much my customers

pay me each month.

But yeah, if I look at our customer acquisition cost, which we calculate based on our marketing,

I think: OK, each customer costs us roughly so many euros in marketing.

Yeah So yeah.

So that's where it can be interesting:

you have a SaaS product, basically,

what is your opex per customer?

And where that gets more interesting is perhaps there are services that you're operating that have a very high opex,

but low relative to the value that you're providing, for example.

So I think it's ultimately up to the business to decide.

I think one of the things that's been frustrating for me in this position with infrastructure is that it's always seen as a cost center right.

It's like where money's going.

But you're not showing value.

But we need,

we need to get better about showing the value that we're providing as well.

And tying that back out to metrics that the business uses.

So what are those.

I don't know.

Right, yeah, good points there.

Let's see, what are some other interesting ones?

Another one we can talk about is design systems. The idea is to provide a collection of design patterns, different component libraries, et cetera,

that speeds up future development and so on. And it's close to what we do with reference architectures,

like what Eric showed today answering the question about subnets, and to what we do with the Terraform modules.

So this is true for modules: it is a collection of components that interact with each other, like a mosaic. Does anyone have something similar, component libraries, documentation, for example?

Well, let's keep going.

So another interesting thing is binary attestation.

This is an instrument.

They provide a list of tools, such as in-toto

and Docker Notary,

that let you do cryptographic verification of binary images: that they are authorized for deployment, integrity checked, et cetera.

So we had an experience with that, in a basic way.

We had a single storage for artifacts, for Docker images, and that gives us a guarantee we run the same binary

images across all accounts.

And if an image passes the checks,

then it is,

it is, like, approved.

So binary attestation

looks interesting,

in addition to the flow we already have.

Is anybody practicing this here.

Not yet.

I'd love to, basically.

I wish, you know how it is.

Yeah, I honestly, I I've been trying to figure out how to write up my story without getting into trouble because it would be hugely embarrassing for a number of companies.

But yeah.

Honestly, the Fedora people do this. Fedora

does this; a lot of the packages,

a lot of the official package registries do that.

But at the Docker level, deploying cryptographically signed Docker images is one of those things where

we know we should be doing it, but I don't really know anybody who is doing it.
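For reference, this is roughly what the Notary-based mechanism looks like in practice with Docker Content Trust. The registry and image names are hypothetical placeholders.

```shell
# Enforce signature verification for pulls and pushes in this shell:
export DOCKER_CONTENT_TRUST=1

# Pushing now signs the image with your Notary key
# (registry, repo, and tag here are hypothetical):
docker push registry.example.com/my-app:1.2.3

# Inspect who has signed a given tag:
docker trust inspect --pretty registry.example.com/my-app:1.2.3
```

With the environment variable set, `docker pull` will also refuse unsigned tags, which is the enforcement half of the story.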

Zalando, Zalando does it, or at least they do it for their Kubernetes operator.

Like, they have some really wonderful Python framework there.

Again, I'm kind of a Python guy.

I don't really know much about Go.

But they have that baked in.

In fact, even just to contribute patches you have to register your key with them.

Oh yeah. I mean, that's, yeah, that's the upside, though, right.

The Git side.

Well, the thing is that the operator is an image.

In other words,

it's the artifact.

Basically, your artifacts become the thing you sign, so yes.

Yes, it would be,

it would be on the Git side, right.

But they also sign the images that get built, because they publish those things out.

I mean, that's how the code runs: as a Docker image.

Yeah, so Zalando does that too.

Oh, yeah, sure.

A lot of sees.

Yeah, it's funny, the name of the repo is a little bit strange.

It's like one of their incubator.

Hang on a second.

I had it right here.

I put it in the office hours channel.

OK, I sent it to you.

Well, I will check it later.

Yeah, that's interesting. really interesting.

So there is the dependency drift fitness function.

So the idea is to define, as a metric,

how many dependencies your software has,

and see whether this metric goes up or down.

And then they control complexity based on this function. We didn't have it,

but,

yeah, it looks interesting.

It should be easy to implement, and it kind of gives you observability into what needs to be changed: at which point you're using an outdated version of a component. The dependency drift fitness function is a technique from evolutionary architecture: fitness functions to track these dependencies over time, flagging where work is needed.

Is this basically being able to see at a glance how out of date all your stuff is, like your Docker images, your Helm releases, your packages, your charts, relative to all the upstreams, et cetera?

Or I don't know.

Or, I don't know, I think the goal is that when you add something into your software, you can see how much dependency it adds.

So for example, you don't want to pull in a Python tool that drags in too many dependencies.

And when it's not only you, but you have, like, a lot of teams,

yes, exactly.

It's a metric that shows you what is going on in the project if anyone adds something. Yeah.

So there's that side, which is kind of: as you're adding, you're increasing the surface area of the code you manage.

And then there's also the drift of all the dependencies in that, as they get out of date and the debt piles up.

So that's what I was working against with helmfile. Because, I mean, the idea is to be as up to date as possible and to track how up to date your dependencies are.

That's how I understand it.

Yeah, this is where helm diff, or was it helm-def, comes in.

What was the name of it?

Yes, yes.

It would need,

now, an exporter.

Yeah, yeah.

So we can implement the drift fitness function for Helm.
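One low-tech sketch of that drift check for Helm is the helm-diff plugin (the databus23/helm-diff project); the release and chart names below are hypothetical placeholders.

```shell
# Install the helm-diff plugin:
helm plugin install https://github.com/databus23/helm-diff

# Show what would change if we upgraded a release to the current
# chart version, i.e. the drift between what's deployed and what's
# current (release and chart names are hypothetical):
helm diff upgrade my-release stable/nginx-ingress

# helmfile can run the same check across every release declared
# in a helmfile.yaml:
helmfile diff
```

Feeding a count of drifted releases into a metrics system would turn this into the fitness function being described.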

That's it.

That is interesting, especially, like, if you've got an upstream project that you don't know all that much about.

I mean, I hung out with the Python Sanic people.

It's a kind of async web framework.

Lightweight, and they're very nice guys.

Honestly, I was just there, really almost like, you guys, just my support group.

And I noticed that their package in Fedora was out of date and it wasn't building properly.

So I kind of like tried to.

I haven't finished yet.

But I've been updating it.

And they're all these weird changes going on in Python with Python 3 and phasing out a lot of Python 2 stuff.

And then all these weird web sockets things.

And it was like: oh, this could get really gnarly. And Fedora, like, they want to use all the latest stuff,

but they still want to support the older things.

And it's like, oh my god, I think the Python 3 and Python 2 thing would have exploded any fitness function.

Yeah Yeah.

Well, it was interesting. There's also, like, what Perl has done with Perl 6.

Now they just renamed it, or I believe they at least voted to rename it, and I forget the name they've chosen.

But, like, Perl 6 was, like, a totally different language.

Let's not pretend that there's an upgrade path between Perl 5 and 6.

So they just cut bait and created a new language, after it had dragged on for something like 15 years.

It was exactly like the butt of a joke when I was in college, the year 1999 or whatever; I remember talk of Perl 6 back then.

Bringing back memories, man.

OK, so, where were we? Yeah.

In this section, there are two,

two points that are related to each other, and where we can go further.

And there is interesting stuff in them.

So this is security policy as code, and sidecars for endpoint security.

So yeah.

Well, when we talk about security policy as code in our everyday practice, we usually talk about IAM policies that give permissions to some applications, when we have Kubernetes interact with Amazon and get access somewhere, plus security groups and all that stuff that manages security.

But here I found things that look interesting, where we can do more with this.

So this is Open Policy Agent, which is a tool that gives you the ability to define different security policies as code, and it integrates with a lot of platforms and mesh services, like Istio, for example.

Here is the description.

So it supports Envoy,

Kafka, I don't know what else it says.

This is the company,

and it looks like it's in the incubator program

of the Cloud Native Computing Foundation.

So it seriously looks promising, and really there are two fields where

a lot of new tools are appearing:

security, and cost control in the cloud.

So this is an instrument

you should look at.

And related to it is the sidecars

for endpoint security point.

So when you use a public cloud and you run services across, or outside of, one cloud provider, then you can use a sidecar for endpoint security, and Open Policy Agent gives you a standard

for how you can define policy across different clouds and environments.
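A tiny sketch of what policy as code looks like with OPA: a hypothetical Rego rule that only allows container images from a trusted registry, evaluated against a sample admission request. The registry name and file names are made up for illustration.

```shell
# A small Rego policy: deny any pod whose image isn't from our registry.
cat > policy.rego <<'EOF'
package kubernetes.admission

deny[msg] {
  image := input.request.object.spec.containers[_].image
  not startswith(image, "registry.example.com/")
  msg := sprintf("image %q is not from the trusted registry", [image])
}
EOF

# A sample admission-review input to evaluate against:
cat > input.json <<'EOF'
{"request": {"object": {"spec": {"containers": [{"image": "docker.io/evil:latest"}]}}}}
EOF

# Evaluate the policy locally (if the opa binary is installed):
command -v opa >/dev/null && \
  opa eval --format pretty -d policy.rego -i input.json 'data.kubernetes.admission.deny'
```

The same policy, unchanged, can then be served by OPA as an admission webhook or a sidecar, which is the portability argument the radar makes.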

So, does anyone have experience with something similar?

Or, how do you solve the security problem when you use more than one public cloud, or something like that?

Because we, I'm on Amazon only; we aren't multi-cloud yet.

You see all these demos for the service meshes that show how easy it is to do cross-cloud networking using the mesh.

It's all in the data path.

This is looking up; apparently sidecars are now a first-class citizen.

I think that's one of the ideas there, yeah.

So I know, in the OpenShift scene, there's a lot of discussion around,

well, so Argo, I guess, is getting some traction among OpenShift users, and the first time I heard about this stuff was just a few days ago, when people were asking whether they could use OPA to federate

the auth, the authentication, the authorization, for Argo. Because the thing about Argo is it's got, like, its own:

you have to auth into Argo and you also have to auth into OpenShift.

And so it's a little bit, and Argo's really, really not bad:

it uses Dex,

so it's very simple to integrate, like, Google OAuth or GitLab or GitHub, super, super simple, but it's still, like, its own thing.

So it would be nice to have it somehow federate. That would probably be, like, maybe in those CD apps where you're running nested administrative domains; maybe that would be a place to do that, where you could

say, you know, you only have to

auth into, like, the parent container,

so to speak, or the parent framework, a little bit like what AWS tries to do with Cognito, maybe.

So, if I follow what you're saying:

there's a need for something in Kubernetes land which makes it easier to standardize authentication across apps,

yes, and make it all work.

And I agree.

And we've spent a lot of time on this.

We personally use Keycloak, but I mean, it's just like everything's a hack on top of a hack on top of a hack.

And then, with Keycloak,

we use gatekeepers; we deploy a gatekeeper for every single service we want to expose.

But even though we do that, all we're doing is providing an authenticated way to access the app, which very few apps actually then handle:

yeah, fine-grained access controls. The Kubernetes dashboard is the exception to that, where we can actually pass through the role, and then the Kubernetes dashboard honors it.

But yeah, I wish it were more standardized.

This is the challenge, though, with open source, right:

integrating dozens of technologies from dozens of different vendors, where some use Kubernetes and some use something else.

And I'm not holding my breath.

Well, this is what OPA is trying to achieve, right?

I mean, look, I think that's technically what they are trying to solve, and they're doing a great job.

I mean, I know a German company, or Dutch company, is implementing their compliance stuff with it, and they're pretty successful with it; it's getting a lot of traction in Berlin right now.

That's from what I see.

So OK.

So it works

with Envoy, so I can add an Envoy proxy.

So I see the connection point there.

Then let me just put a link in the office hours channel.

OK, cool. It integrates with a lot of things,

I mean, OPA at least.

All right.

Well, then that is encouraging.

Yeah Yeah.

So that's all from this section.

If you liked this format,

guys,

I would be happy to continue.

The next part would be to look at platforms, especially where we have experience with some of the tools and platforms mentioned there, for example Teleport, Dependabot, et cetera.

And if there is something there that you use

or have tried,

we can discuss it.

So that is it. Thanks so much.

And, yeah, thank you.

Yeah, thank you for bringing that up.

Well, I think what we'll be doing is pulling some of these topics out and adding them as talking points for future office hours as well.

So, any other questions, guys? Totally unrelated topics and talking points welcome as well.

re:Invent, man.

Yeah Yeah Yeah.

Cool Yeah, totally totally.

It's the elephant in the room right now.

Obviously: re:Invent.

So what are some cool announcements there,

out of the 150 announcements or whatever that they're making?

I feel like it's almost abusive at this point, the number of announcements that they drop on us in one week.

Yeah, I tend to just wait until it's done.

And then start watching them on YouTube.

So I can actually.

I mean, there are lots of memes going around on Twitter right now.

Like that one,

really, yeah,

about CodeGuru being billed per line of code.

It's all like: oh yeah, we don't need newlines,

we'll just put it all on one line and not pay anything for the service.

So, related to re:Invent: I chatted with my AWS rep

yesterday about the new Savings Plans stuff.

If you haven't had a chance to look at that.

It's well worth your time.

Basically, it takes the best part of convertible reserved instances. Rather than having to commit to a certain family and number of instances in normalized units and do all those calculations,

especially if you're rapidly changing between instance types and scaling, which

can be really hard to figure out,

you just commit to a dollars-per-hour amount, and it applies across all your compute, and cross-region, which RIs don't do.

RIs are always region-specific.

You can never move those across regions.

So it's actually pretty cool if you.

It's GA

now, as far as I know; they announced it beginning of November sometime.

So I would definitely highly recommend looking at that.

Talk to your accountant, make sure it makes sense for your use case.

But I do not see a use case where it's worse.

I haven't yet seen a use case where it's worse to go with the new Savings Plans for your EC2 compute rather than reserved instances.

Yeah, so just keep in mind,

when those start expiring.

Yeah, that's interesting also.

Yeah, exactly.

When your RIs are expiring, this is probably the way to go after that.

And there is the question of the savings of that relative to using spot, for example,

and how big the delta is between

RI pricing and spot instances.

So the Savings Plans pricing, for me:

I was looking at partial-upfront, three-year dollar-commitment amounts.

And it was roughly, it was right,

it was just about 50% off your on-demand cost.
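Rough math on that 50% number, as a sketch. The on-demand rate below is illustrative (roughly an m5.xlarge in us-east-1 at the time), not a quote.

```shell
# Illustrative rates, not actual quotes:
on_demand_hourly=0.192   # e.g. an m5.xlarge on-demand rate, $/hr
discount=0.50            # roughly what a 3-yr partial-upfront plan gave

effective=$(awk -v r="$on_demand_hourly" -v d="$discount" \
  'BEGIN { printf "%.3f", r * (1 - d) }')
yearly_savings=$(awk -v r="$on_demand_hourly" -v d="$discount" \
  'BEGIN { printf "%.0f", r * d * 24 * 365 }')

echo "Effective hourly rate: \$${effective}"        # $0.096
echo "Savings per instance-year: \$${yearly_savings}"
```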

Nice So pretty nice.

Obviously, it's pretty comparable to RIs.

Some places I saw, it was a higher percentage of the on-demand, and it was never below that, from what I saw.

So as I said, I don't see an instance where it's worse to use it, and you gain the flexibility of the Savings Plans stuff.

They only have it for EC2 compute; you can't use it for RDS.

Yes, they have a lot harder job tackling RDS, because they have to deal with software licensing for SQL Server and Oracle and all that crap.

And then they also don't have it for, like, ElastiCache instances yet either; they have RIs for those.

So for those, you still have to go through RIs.

But the Savings Plans:

now, I hope they announce something for RDS at re:Invent, like: yeah,

we're going to expand this to RDS and ElastiCache this year. Because I'd rather just say: yeah, I'm going to spend 30 grand a month on my compute and 20 grand on

RDS, please give me a discount on that guaranteed spend, and go from there.

Yeah Cool.

Yeah, glad they brought that up.

I had forgotten to bring up the thing about Savings Plans.

That's a pretty good cost optimization

trick that they introduced. Just jumping back:

I'm pretty sure we'll be talking more about re:Invent next week, since we're already up at the end of the hour here.

Any other big announcements? Obviously EKS on Fargate being a big one.

And then the week before last, or just previously, last week, whatever, was the fully managed node groups for EKS. Both have Terraform support already, which is cool, because that means that Amazon is working directly with HashiCorp to get this stuff ready for the announcements.

Thanks so much.

Good question.

I don't know if that was true for CloudFormation too.

But yeah, sometimes even CloudFormation lags behind, right?

So it'd be interesting. Terraform is usually nice, sometimes; usually. That is interesting.

Yeah, let me know if any of you find that out; post back in office hours, it'd be interesting to find out.

I don't know either.

Actually, this is if any of you guys want to monkey around with, like, some of that cost stuff:

certainly for the next couple of months,

I have pretty much unsupervised access to a bunch of demo servers and resources in Sumo Logic.

So we can do all sorts of aggregation of logs and metrics there.

Thanks for,

thanks for extending the offer.

You guys can hit him up in the office hours channel too.

Yeah, exactly.

Yeah, all right, well then,

that brings us to the end of the hour,

and that wraps things up.

Thanks, everyone, for sharing. Igor especially, for taking the time to prepare the notes on the Technology Radar.

I always learn so much from these calls, and a recording of this call is going to be posted in the office hours channel.

See you guys next week,

same place, same time, guys.

Public “Office Hours” (2019-11-27)

Erik OstermanOffice Hours

Here's the recording from our “Office Hours” session on 2019-11-27.

We hold public “Office Hours” every Wednesday at 11:30am PST to answer questions on all things DevOps/Terraform/Kubernetes/CICD related.

These “lunch & learn” style sessions are totally free and really just an opportunity to talk shop, ask questions and get answers.

Register here: cloudposse.com/office-hours

Basically, these sessions are an opportunity to get a free weekly consultation with Cloud Posse where you can literally “ask me anything” (AMA). Since we're all engineers, this also helps us better understand the challenges our users have so we can better focus on solving the real problems you have and address the problems/gaps in our tools.

Machine Generated Transcript

Let's get the show started.

Welcome to Office hours.

It's Thursday, or, no, Wednesday, November 27, 2019. My name is Eric Osterman and I'll be leading the conversation.

I'm the CEO and founder of Cloud Posse. We're a DevOps accelerator: we help startups own their infrastructure in record time by building it for you and then showing you the ropes.

For those of you who are new to the call the format of this call is very informal.

My goal is to get your questions answered.

Feel free to unmute yourself at any time if you want to jump in and participate.

We host these calls every week,

and we automatically post a video recording of the session to the office hours channel, as well as follow up with an email.

So you can share with your team.

If you want to share something private just ask.

And we can temporarily suspend the recording.

With that said, let's kick this off.

I have a few talking points that came up this week that I'd like to bring to everyone's attention.

But we'll also get to answering your questions too.

So if there's ever a lull in the conversation, we'll get into the new helmfile provider for Terraform that mumoshu came up with.

And by reviewing his comments on that,

I discovered the Terraform shell provider, which is pretty rad.

It's the ultimate escape hatch for plugging into Terraform without being a low-level Terraform developer. And lastly, we can wrap things up with GitHub Actions,

if there's nothing else to cover.

All right.

I was just talking with Zach.

Zach is a new member of the community.

He's been helping submit a bunch of fixes on our packages repo.

Thank you for those.

And he was just sharing some of the stuff he was doing on some big data pipeline.

Stuff like that.

Zach hey.

Thank you.

I've sent a bunch of PRs, well, a couple.

Sorry for the back and forth.

We've been struggling with that.

The packages repo is a little bit of a testbed for CI/CD for us; we're testing GitHub Actions there, and a bunch of other little experiments and stuff.

So we,

yeah, we've had some issues with stability on the pipelines in the packages repo, and it impacted your ability to contribute there.

Did you see my push to fix the latest thing you were working on,

with the tools?

I did, I did that.

Hopefully it will work.

Now? OK.

Well, I haven't,

I mean, actually, I need to rerun the test to see if that works now.

Hey Dale, welcome. Hey Maddie.

Nice, cool.

So let's see.

So, are any of you using helmfile today?

Yeah All right.

Cool cool.

And then I take it you guys are also using Terraform, right?

So then this is kind of exciting.

I don't know if you saw the news,

but I've been, I've been bugging mumoshu for, like, the better part of last year that we need a Terraform provider,

so that we can integrate helmfile with the full lifecycle of everything else going on.

And I'm pretty excited that he came out with this.

The benefit of using this is that you can now pass values like this and have interpolations, or reference outputs of other modules.

So it will streamline your internal automation of bringing up a cluster from scratch that depends on integration touch points provisioned by other Terraform modules.

Is this available now?

What was that?

Is this available now?

Yeah, is this available now?

It's not.

I just opened up an issue here

for a published binary.

He's pretty good about that.

He'll he'll probably do that in the next day or so.

But right now, there's no published binary.

So you've got to go get it yourself and build and install it.

And since it's not an official provider, you know, you need to install it in the plugins directory, like this.

That said, I'm thinking of distributing a package for it in our Alpine repository.

And then, I believe, I haven't tried this actually, I believe we can then set this environment variable

so that you can have a shared path on the file system where you have all those plugins. If anyone can correct me on this,

that'll be great, if that's not the case.
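For reference, installing an unofficial provider by hand under Terraform 0.12 looks roughly like this. The repository URL is the one being discussed; double-check its README for the exact build steps, since this is a sketch.

```shell
# Build the provider from source (Go toolchain assumed) and drop the
# binary where Terraform 0.12 looks for third-party plugins:
git clone https://github.com/mumoshu/terraform-provider-helmfile
cd terraform-provider-helmfile
go build -o terraform-provider-helmfile

mkdir -p ~/.terraform.d/plugins
cp terraform-provider-helmfile ~/.terraform.d/plugins/

# A shared plugin cache across projects can be configured with:
export TF_PLUGIN_CACHE_DIR="$HOME/.terraform.d/plugin-cache"
```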

All right.

I'll have a chance to test it out in the next few weeks.

I'd love to be able to tie in my Terraform with the helmfiles that I've set up for my clients.

So yeah, yeah.

So we're testing it this week; Andriy on my team is working on that right now.

We're going to be using it together with Terraform Cloud.

Well, one of the things he did is he created

this provider.

Dude I've been looking for one like this.

There are a bunch of them out there.

There's, like, terraform-provider-external or something by someone else,

but it hasn't been updated in, like, two years.

So here's a provider that is being maintained; it has commits as of just a few days ago.

So that's cool.

And if you look at what this provider does: it exposes the underlying lifecycle hooks in Terraform, so all the CRUD: create, read, update, and delete.

And that's cool, because now you can just tie into that. Like, if you wanted to

now use Terraform around the kops CLI, look, kops is quite cool,

you can now do that just by scripting it here.
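A sketch of what wrapping a CLI like kops in those CRUD hooks could look like. The resource follows the shell provider's lifecycle_commands convention; the cluster name and commands are illustrative, not a complete working config.

```shell
# Sketch: wire kops commands into Terraform's CRUD lifecycle via the
# shell provider (requires that third-party provider to be installed).
cat > main.tf <<'EOF'
resource "shell_script" "kops_cluster" {
  lifecycle_commands {
    create = "kops create cluster --name=dev.example.com --yes"
    read   = "kops get cluster --name=dev.example.com -o json"
    update = "kops update cluster --name=dev.example.com --yes"
    delete = "kops delete cluster --name=dev.example.com --yes"
  }
}
EOF

# Then the usual workflow drives the scripts:
#   terraform init && terraform apply
```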

Sounds like something maybe you'd be interested in, Dale, I don't know.

Certainly. Pablo, welcome.

So, I submitted a PR, I think a week or two ago, on the IAM user module.

Oh, shoot.

OK, let me see here.

So, to capture my thoughts: I'm trying to leverage tags and the attributes to do a mock-up, to play through the scenarios I saw in the documentation, where teams were actually tagged, like, members of the team and the resources were tagged, and you could only interact with the resources that you're tagged for.

So you can set up, like, a role-by-tagging type scenario with resources, what's called ABAC.

Yeah, I'm familiar with it, at least conceptually. Yes, yes.

So I was watching all of the talks, I think it was from an AWS event, and they were walking through using attribute-based access control, and they walked through a scenario where different teams were tagged, like zombie and unicorn, and what resources they could actually terminate or interact with.

So you wouldn't have team members touching other teams' resources, that kind of thing.

Did you say you had a PR for that, or?

Yeah, I think it's an awesome PR.

Well, it wouldn't be coming from me directly; it would come from the LunarOps account.

Gotcha, I thought you said it

was on the IAM user module, I think it was.

Yeah, yeah, I didn't see an open PR there.

If so, or unless maybe we merged it,

I'll take a look again.

Oh, yeah.

This was already merged.

OK Yeah.

Thank you.

Thank you for that.

Andriy reviewed it; he's the go-to guy on my team for Terraform stuff.

So the fact that he didn't tear it up.

That's a good sign.

Perfect Yeah.

Let's cut a release tag.

Yeah, any of the newer zero-dot releases have your changes.

Thank you.

Thank you.

Thank you.

Cool. Looks like Pablo dropped off. Nadia, are you just observing today?

Any questions?

Yeah, no questions.

Just following along.

I don't.

We're not using Cuba that is right now we're using efca.

Or just kind of playing around with it for the most part.

OK though with Terraform.

Yeah, we're using, or I should say I'm using, your modules.

Oh, awesome.

Good to know.

Good to know.

Yeah, we've updated all the ECS modules.

They now support HCL2 for Terraform 0.12 as well.

Are you on the latest.

I have.

I have not integrated your changes in yet.

OK, I actually upgraded them on my own.

Like I don't know a month or two ago.

Yeah, it's not used in production or anything right now at all.

I'll update yours again.

Also, we've now added Terratest.

So they're all tested every commit.

If you guys are curious about Terratest and haven't really dabbled with it, it is pretty sweet.

I've been pushing it off for a while to be honest.

And I want to just use bats or something simple like that.

But with Terratest, it does.

It does work really well.

So we have a source directory.

The test/src directory is where we put the code for Terratest.

Our convention is we name the test after the example.

If you go under our examples here.

And then complete here's the one that we're testing and then fixtures here's what we test our modules with.

So one thing that we do that's nice is we distribute a Makefile here.

It makes it easy to use Go in the current working directory without having to set up the whole directory scaffolding that Go expects.

We just hack it together with some simple symlinks here.

So you can just run make test and it just works, whether it's run on the CI or like a pre-commit hook.

We don't use many pre-commit hooks right now, mostly because they're unenforceable unless you actually add them to the CI.

But after Andrew's demo last week,

I'm more curious about it.

Specifically I want to add a pre commit GitHub action to a lot of our projects.

And then add that there.

So yeah, it's in the backlog.

I do want to do that.

But no not right now.

So yeah, for us, we run it right now

in a Codefresh pipeline, this test.yaml.

Basically, yeah, we just clone the repo.

We initialize the build harness and our test harness.

This build harness is what we use everywhere.

We just standardize how we interact with our projects.

And then there's.

And then here we run the tests in parallel.

We do the linting.

We run the bats automation tests.

Oh no, here, here are the bats automation tests; bats is short for the Bash Automated Testing System.

I think so we have a bunch of stock tests that we distribute in our test harness for that.

And then we have the Terratest step right here where we just call make in this test source directory, nothing much to look at.

But yeah. Do you do anything specific in them, apart from just the test, or was it...

Tony Jeremy.

Yeah, I didn't even open it up.

Let me.

So we're not doing, like, exhaustive variable testing.

So look, there's the Pareto principle, the 80-20 rule, right.

You can get 80% of the benefit by doing maybe 20% of the work.

But to get the last 20% of benefit will take 80% more effort.

So we do the minimum. In our experience, just bringing up a module and bringing it down again catches 99 percent of all the issues you're going to have. Testing the functionality of what you've done?

Yes that's valuable.

But it's going to take you 80% more effort to implement.

So in a lot of our tests we're doing kind of the minimum effort, where we instantiate the module with the example that we publish, using our fixtures, and then we destroy it afterwards.

We evaluate the outputs of that module to ensure that they match what we expected.
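A minimal Terratest case along the lines described (apply the example with fixtures, assert on an output, always destroy) might look like this. This is a sketch: running it requires the Terratest library and cloud credentials, and the example path, fixture file, and output name are placeholders, not from a specific Cloud Posse module.

```go
package test

import (
	"testing"

	"github.com/gruntwork-io/terratest/modules/terraform"
	"github.com/stretchr/testify/assert"
)

func TestExamplesComplete(t *testing.T) {
	terraformOptions := &terraform.Options{
		// Points at the published example, parameterized by fixtures.
		TerraformDir: "../../examples/complete",
		VarFiles:     []string{"fixtures.us-east-2.tfvars"},
	}

	// Always tear the module down again, even if assertions fail.
	defer terraform.Destroy(t, terraformOptions)

	// Bringing the module up catches the vast majority of issues.
	terraform.InitAndApply(t, terraformOptions)

	// Check that outputs match what we expect given the fixture inputs.
	id := terraform.Output(t, terraformOptions, "id")
	assert.NotEmpty(t, id)
}
```

The `defer terraform.Destroy` is what makes the "bring it up, bring it down" pattern safe to run on every commit.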

But we're not doing really elaborate testing of functionality.

The exception to that is our EKS module; we're doing a lot more on that.

Andrey spent some more time to ensure that we loop and wait until the nodes join, and that everything is healthy.

What's so cool about Terratest being in Go

is you can leverage the whole ecosystem of Go modules for that.

So Andrey implemented an event handler here to detect when the nodes join.

So one of the most common problems.

people were reporting to us was that their nodes were not joining the cluster.

So by implementing this test.

Here we ensure that no change breaks the ability for nodes to join the cluster.

And we're just using the support here, not built in,

but something Andrey wrote.

I forget exactly; I'm just riffing here.

Yeah, so he's using the Kubernetes Go client libraries to make that interaction easier.

That's my dog.

Mean his spoken.

Yeah, if this Terratest stuff is interesting,

let me know in the office hours channel and I can have Andrey prepare something for a future session.

Yes Yes it's cool.

All right.

I'll make a note of that.

So with things like Terratest, where do you draw the line?

Hold on, my dog's bone.

I have to get my dog's bone.

It's driving me crazy.

One second, I'll be right back.

Where do you draw the line on

testing whether the provider and Terraform work versus whether my business logic works? I find that when testing infrastructure,

a lot of times you're just retesting stuff that should already be tested upstream.

Yeah, exactly.

You don't want to test...

All the Terraform providers have extensive testing.

So you don't need to test that kind of stuff.

So it's more.

That's why I think testing the values of the outputs is very valuable.

And then going back to the Pareto principle applied to testing, right.

If you're going to spend 80% more effort trying to test for it, is it worth it for you right now to do that, when 99 percent of the problems are caught just by bringing something up and down?

That's how I draw the line.

Basically, if we're going to have to spend three days developing a test for this, then we're just not right now because we got too much.

So like business logic.

Let's say Yeah.

I mean, it's a difficult decision. Like, let's say you have some business logic to deploy an S3 bucket with a CDN, and you have some CORS rules that you need there.

You could probably test that pretty easily.

And that the CORS rules are effective for you by just simply provisioning that module,

and then initiating an HTTP GET against that resource and checking that the headers are good.

And that would not be a big project that would be just like, an hour's worth of effort.
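As a concrete example, the CORS rules in question might be declared inline on the bucket like this (Terraform 0.12-era `aws_s3_bucket` syntax; the bucket name and origin are hypothetical):

```hcl
resource "aws_s3_bucket" "assets" {
  bucket = "example-assets" # hypothetical bucket name

  cors_rule {
    allowed_origins = ["https://app.example.com"] # hypothetical origin
    allowed_methods = ["GET", "HEAD"]
    allowed_headers = ["*"]
    max_age_seconds = 3600
  }
}
```

The test would then issue an HTTP GET against an object with an `Origin: https://app.example.com` request header and assert that the `Access-Control-Allow-Origin` response header comes back as expected.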

Yeah, it makes sense. It's just drawing those lines that's hard, especially, I've noticed, for more junior developers who are just getting into integration testing.

I see them struggle with that a lot.

Yeah, you end up with a lot of tests that just duplicate upstream effort or don't provide any value for the time spent on it.

Yeah. Yeah, I can't say anything that doesn't sound kind of obvious to you or me.

When I say test here,

the thought process is really: test what is absolutely unique to what you're implementing,

but not what is unique to the underlying primitive or resource.

So the fact that you need a CORS rule for a certain kind of request, that's unique to your business, that's bona fide for testing. The fact that the bucket is created, you don't need to test that.

The bucket was created if Terraform says it was created.

Exactly. Where this proved incredibly useful was when we first started porting our modules to 0.12.

I think we've got like 50-60% of our modules, and all the most important modules, now on 0.12.

It's just like some little remnants here there that are not.

But when we ported our null-label module over, it was a pretty complicated module in terms of all the replacements and coalescing and formatting and stuff that we were doing with it.

So having tests on the outputs for that to make sure that we didn't introduce a regression was critical and it actually helped me catch a bunch of things along the way.

So here's an example.

We're just testing that the module is producing the kinds of outputs that we would have expected, given the kinds of inputs that we're giving it.

And those inputs are in the fixtures. Cool.

Any other interesting announcements you guys have seen.

Cool projects lately.

Bookmarked Yeah.

We'll make a run for it.

I do really want to hear more.

But what was it, Rio?

By Rancher, yeah.

It's like a GitOps solution. It's not recommended for production just yet, but it looks fascinating.

A micro-PaaS.

I didn't miss this.

I didn't see this.

Just keep clicking in the wrong places.

Let's see.

So Rio is a micro-PaaS that can be layered on top of any standard Kubernetes cluster, consisting of a few custom resources and a CLI to enhance the user experience.

Users can easily deploy services to Kubernetes and automatically get continuous delivery,

DNS, HTTPS, routing, monitoring, autoscaling, canary deployments, and git-triggered builds.

Oh, very cool.

So is this pretty minimally deployed? Like, my problem with a lot of these PaaSes is you've got to deploy so much infrastructure just to support the PaaS.

It doesn't make me excited.

So I'm wondering how many I'd love to see an architecture diagram for this.

OK, so it's built on Linkerd,

Prometheus, Tekton,

Let's encrypt.

OK That's very cool.

Yeah, it'd be a fun little POC too.

I'm just going to share this to the office hours channel.

So related to this.

It reminds me of something else.

I saw.

It's by Weaveworks. Not Flux; Flux has gotten a lot of attention lately because of the collaboration now between Argo and Flux. And who is it that's backing it?

Red Hat?

That's backing it, or there's somebody spearheading this, is my understanding.

But no, this one is Weaveworks' Flagger.

Have you guys seen this.

Whoops Yeah, I clicked on the wrong one.

So everyone talks about doing canary rollouts and blue-green rollouts.

But when you get down to actually doing it,

There's a little bit more to it.

And the right way to do it, of course, is a staggered rollout which is integrated into your monitoring system.

So it can automatically progress based on your monitoring solution or your metrics.

And that's basically what Flagger is doing.

So it's a controller for basically that process.

So as you see, it ties into the Kubernetes APIs to see the progress of your rollouts, Prometheus for your metrics, and then it can progressively increase the traffic to your services.
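The progressive-traffic-shift behavior being described is configured on a Flagger `Canary` custom resource, roughly like this; the deployment name and values are illustrative, and field names vary a bit between Flagger versions:

```yaml
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
  name: myapp                 # hypothetical deployment
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  service:
    port: 8080
  canaryAnalysis:
    interval: 1m              # how often to evaluate metrics
    threshold: 5              # failed checks before rolling back
    maxWeight: 50             # max % of traffic shifted to the canary
    stepWeight: 10            # traffic increase per interval
    metrics:
      - name: request-success-rate
        threshold: 99         # minimum success rate (%) from Prometheus
        interval: 1m
```

Each interval, Flagger checks the Prometheus-backed metrics and either advances the traffic weight by `stepWeight` or rolls back after `threshold` failures.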

And that's cooler this year as well.

I was looking at this one as well.

What I think is interesting is, if you do actually kick the tires on it, or end up using it for a serious thing, do let me know.

Be pretty cool.

Also Also this goes for anyone here.

If you guys are working on anything neat.

And you want to do like a show and tell, that would be really cool.

So Flagger and Rio. All right.

I'll share Flagger in office hours here.

Have you guys heard of Podman?

Yes Yes.

Yeah, I don't know how realistic that is

as far as using it in AWS, because it's supposed to be almost a drop-in replacement for Docker.

Yeah, I mean.

OK, I guess it really depends on what perspective you approach this stuff from.

Podman comes up time and time again when you're looking at,

how do managed CI/CD providers allow building Docker containers without giving away the crown jewels?

So basically, they wrap around Podman for doing those Docker builds in a secure way.

But Yeah, I haven't tackled it.

From my kind of level, I'd say there are all these micro-optimizations just like this that are really awesome.

And I think are maybe worth pursuing, especially if it's just like for your closed ecosystem within your company.

But at Cloud Posse, what we're really trying to do is provide the roadmap for integration of all of these components and systems.

And as much as possible.

We want to defer thought leadership to the tools and technologies that we use.

So for example, for Kubernetes today we've been mostly using kops.

We do have the EKS Terraform modules as well.

But I bring this up.

Because if kops decides they want to use Podman, that's awesome.

We'll support it.

But if kops doesn't support it out of the box,

We're not going to kind of break compatibility with them just to support it.

And introduce just more and more stuff for us to manage.

Are you guys doing a lot of Prometheus?

Are any of you running highly threaded apps on Kubernetes right now

and using Prometheus?

Yeah not yet.

I actually do want to reintroduce using Prometheus with Argo, kind of to clean up our deployment pipeline as well,

and do actual canary deployments with it.

But nothing yet.

There are a few products coming up for next year that may actually require that, but not at present.

OK Yeah.

What do you ask.

So there's recently a fix that has been merged into the alpha channel of CoreOS, as it relates to basically how CPU throttling is implemented in the Linux kernel, and a big problem in how you can monitor for it under Kubernetes. The problem is that you get a lot of basically erroneous CPU throttling alerts

if you're running highly threaded apps under Kubernetes. I'm just spreading the word on this fix.

It's in the CoreOS alpha channel right now. Unfortunately, it doesn't look like it's going to hit kops, like the Debian image that kops uses, anytime soon.
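One commonly discussed stopgap while waiting for a patched kernel, not something from this call, is disabling CFS quota enforcement at the kubelet, which stops the spurious throttling at the cost of no longer enforcing hard CPU limits:

```yaml
# KubeletConfiguration fragment: with cpuCFSQuota disabled, container
# CPU limits are not enforced via CFS quotas, so the throttling (and
# the erroneous throttling alerts) goes away, but pods can exceed
# their CPU limits under contention.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cpuCFSQuota: false
```

Whether that trade-off is acceptable depends on how much you rely on CPU limits for multi-tenant isolation.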

Welcome to Ms Wilson garden I'm always alert.

What was it.

I have a request.

All right next year.

I'm hoping you actually do another session on geodesic and your actual workflow, for a bit more of, I guess, a deep dive.

I like what you actually showed us.

What you showed us last time improved my workflow greatly.

I think that there's a lot more that you guys actually do, which I think is a really cool process.

I want to peek behind the curtain.

Pretty much.

Yeah.

No, you're right.

That'd be pretty cool.

Also, somebody like Alex might also be interested in showing his workflows and stuff.

He's using geodesic weekly, if not daily, and his workflows are efficient.

I'm sure you guys have a lot more tricks and tips.

But I'm happy to do an ad hoc thing.

Yeah also there's a great one.

I should get Jeremy on the line for that one.

Jeremy has taken it to the n-th degree.

What he does inside of geodesic is stuff that I haven't even kicked the tires on.

Basically there's a bunch of extensions that he's added.

So you can add customizations for your experience, if you'd like, that don't necessarily affect other developers.

This is kind of antithetical to our original plan, which was that everyone should have an identical environment.

But the reality was that different developers have different shortcuts, different aliases, different ways of working, so I gave up trying to fight for the one true environment.

The distinction there is: the tools, that portion of the environment, should all be the same.

Yeah, but if somebody uses vim and somebody uses emacs, like, who cares, right?

I agree.

I think that's the distinction like that.

There's a line there where it doesn't matter anymore.

Yeah. So actually this reminds me, it just jogged my memory.

It's a total non sequitur here.

One of the things that we've been seeing a lot more problems with, and you've been experiencing this, Alex, as well, is basically Fluentd directly integrated with Elasticsearch, overwhelming Elasticsearch. Elasticsearch is just a very dainty, sensitive, finicky search product.

It doesn't like being fed too fast or it will just complain.

So a couple interesting things there.

We haven't implemented this yet, but I was just doing some research and bringing this up because one of the announcements was just back in October.

So this is pretty new.

So I think the better architecture for us to implement on this is going to be one using the Fluentd-to-Kinesis plugin, and that's existed for a while.

So using Kinesis as a buffer in front of Elasticsearch.

Yeah.

Exactly, controlling the ingestion.

So that's one thing.

But once it's in Kinesis: A, it can buffer it.

Like you said, B, you can send it to multiple destinations or sinks. One of those sinks can be S3.

So now you can have long-term storage in S3, which can be queried with, what is it, Athena or something.

And then you can also have your real time search with Elasticsearch, and a shorter retention period because you only need maybe the last week or something like that.

So this was the one, click again here for it.

This is the Fluentd Kinesis plugin.

But then Kinesis, and this is already supported with Terraform, can ship the logs to Elasticsearch.

So an Elasticsearch destination.

So it really does look easy enough, right.
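The Fluentd side of the architecture being described is just an output match that sends records to a Kinesis stream instead of straight to Elasticsearch; a sketch using the fluent-plugin-kinesis output, with the region and stream name as placeholders:

```
# fluent.conf fragment: buffer Kubernetes logs into Kinesis rather than
# writing to Elasticsearch directly, so Kinesis absorbs ingestion spikes.
<match kubernetes.**>
  @type kinesis_streams
  region us-east-1          # hypothetical region
  stream_name app-logs      # hypothetical stream name
  <buffer>
    flush_interval 5s
  </buffer>
</match>
```

From there, a Kinesis Firehose delivery stream can fan the same records out to both an Elasticsearch destination (short retention, real-time search) and an S3 destination (long-term storage, queryable with Athena).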

And then related to that, there was just the announcement... Yeah, I've used Kinesis and Firehose over the years, outside of Kubernetes, to deliver logs to Elasticsearch.

It works fine.

I use it.

I used it last year from a bunch of Windows boxes using the AWS Kinesis agent; the logs agent works great.

That's cool.

Yeah, I mean, I think this could have been implemented outside of this.

Like you said I guess it's just they just made it easier, more turnkey now.

Yeah, back when I was doing it last year, I used to have to munge it through a Lambda.

You can see pieces for the Lambda to munge the logs into whatever format you needed for Elasticsearch.

Does all that make sense?

Cool Any other cool projects you guys came across recently.

I'm going to go head over to my own stars.

See what I found.

If there's anything interesting.

Alex anything.

Sorry I missed that I was responding in Slack.

What's a what was a question.

Oh, OK.

No, I haven't come across any cool little projects.

I mean, I have a lot of cool little projects in my head.

I haven't actually played with any of them.

Yeah, Prometheus, as you know, was down for a couple of weeks.

And I finally got it fixed up and found out there's a whole bunch of alerts and problems that have been going on.

There's a little fragility there

that I've noticed: Prometheus can still collect metrics even if it can't display them, or you can't use them because that portion of the stack isn't working.

But that doesn't necessarily cause the watchdog that we set up an alert manager to go off because metrics are still coming in.

And so you don't actually know that stuff is broken until a user reports it, like when refreshing a dashboard fails.

But Alertmanager also is having the same problems.

Like, your watchdog itself fails.

So you don't know. That's one of the things. For us,

we're just spread thin.

So we don't have folks that are focusing on that daily.

So we just didn't notice it for like a week.

Yeah Yeah.

And then when you need it.

It's actually down.

That's frustrating.

Yeah Yeah.

Those are things we can roll out in our next engagement.

Yeah every engagement.

We do with the customer.

We basically extend the alerts.

We extend the things we monitor.

So that might be one of those areas.

It's looking like it's going to be EKS.

So that's where we're probably going to be extending our support, as it's maturing as a product.

It seems pretty popular, and now with managed node pools.

That's pretty awesome.

However, how does that square with Spotinst Ocean, which we briefly discussed last week?

Spotinst Ocean is basically managed node pools for Kubernetes, by Spotinst.

And it also handles the complex calculus of scheduling the right kinds of nodes. So that begs the question: is it worth the roughly 20% charge from Spotinst if EKS has a decent enough managed node pool solution? Where is their value prop at that point?

Yeah Yeah.

So I guess the answer is it depends.

And what you can do.

Well, so first of all, EKS managed node groups don't support spot at all today.

It was just released, what, a week ago.

So I guess that's probably coming very soon.

But it.

What I mean by it depends is it depends on your workloads and how consistent they are basically what kinds of instances or nodes you need.

Will you get by with a very with a naive kind of spot fleet configuration or do you need something more complicated.

What I think is so cool with the Spotinst thing is how it sees how much memory you need.

And then it finds the right kind of instance, for the cheapest price that satisfies that requirement.

And I think that's really just very cool.

So it's not just using spot.

It's also right-sizing based on the instance type.

And for that, you need a scheduler in Kubernetes, and Amazon would need to develop a scheduler for Kubernetes that could do that.

Yeah, it seems pretty slick.

I want to look at it more when I have any kind of time.

Also let's see here.

So, let's see.

Someone on our team has been working on this,

and she just implemented it.

You can search the repo names.

That's the way to find it.

Yeah, I'll have to see, since I didn't work on it.

I don't remember.

Yeah, this here.

So I should publish this to the registry.

So we just developed this module here, which extracts the launch configuration data from kops, so that you can use that together with our helmfile for Spotinst to set the parameters that are needed.

So there's a bunch of parameters for Spotinst Ocean that come from your Kubernetes cluster, and we want to automate that end to end.

I guess we haven't merged yet.

Since it's a work in progress.

Yeah, here.

So that other module is to get the kops data for Spotinst.

I don't know.

It's a demo for you for them.

I don't know where those launch configurations are; I would have expected to see them in here somewhere.

They may be buried in

the kops stuff somewhere.

But either way.

So we actually cut bait on that.

So Spotinst supports kops natively, so to say; they have an explicit documentation site for setting up kops. We tried that.

And it was...

the kops controller was getting caught on something.

So we couldn't figure it out.

And even if you implemented that, there were large parts of it that were not fully automated, that were ClickOps-y; there was a click-ish feel to it.

So that's why we went another way.

Spotinst instead has a Terraform provider

for Ocean.

And that's what we implemented instead.
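For context, provisioning an Ocean cluster through that Terraform provider looks roughly like this; the argument names and values here are an illustrative sketch, so check the Spotinst provider docs for the exact schema:

```hcl
provider "spotinst" {
  token   = var.spotinst_token
  account = var.spotinst_account
}

# Rough sketch of an Ocean cluster for an existing kops cluster; values
# such as the cluster name, sizes, and variables are hypothetical.
resource "spotinst_ocean_aws" "default" {
  name          = "example-cluster"
  controller_id = "example-cluster" # must match the Ocean controller's ID
  region        = "us-east-1"

  subnet_ids      = var.subnet_ids
  security_groups = var.security_group_ids
  image_id        = var.node_image_id

  min_size = 1
  max_size = 20
}
```

The point of the companion module mentioned above is to feed values like the image ID and security groups, which kops already knows, into this resource instead of copying them by hand.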

So yeah, that makes sense versus the kops integration.

Yeah So this is what we implemented.

Yeah, this is I guess just a teaser.

Igor is going to present on the ThoughtWorks technology radar probably next week.

We knew that today was going to be pretty quiet given that Thanksgiving week.

So we're going to talk about that.

And then we'll probably do a better presentation on spots as well.

Sure thing.

You had a thing on there like GitHub Actions, have you covered that?

Did I miss that I came in late.

This is just a place holder.

Yes, I can talk about GitHub Actions all day long, because I've been doing a lot with them.

And I think there are a lot of fun.

Yeah, I just need to actually write my first one to understand how it works. Yeah, I think a great place to start would be, like, the Cloud Posse packages repo under GitHub, and here.

Here I implemented three actions that we're rolling out for us.

Here are the actions, that is, the workflows.

Here.

Here are the actions we're implementing for all of our repositories across the board.

For now these are related a little bit to the fact that we do a lot of open source.

Some of this for us.

But one thing is auto-assign.

I really dig this PR.

So on it, it references another repo.

Basically, you can build a library very effectively for your company with all of your actions and then reference them very tersely, just like this.

We use them.

So this here will auto-assign the PR when it's opened. This here will automatically greet new users who open an issue or pull request with a comment

inviting them to join our Slack community.

This here is the one I'm getting the most benefit out of.

Like all the other ones are nice, but this is the best.

I think it's the auto-labeler. Immediate payoff.

It's great because when you go to pull requests, you can look at what we've had going on here.

So every night at midnight GMT we have a process that kicks off.

That looks to see if there are any new packages that need updating.

This is maybe something that, if you use our packages, you'll find pretty cool.

So because it's a monorepo, our packages repo, the labels make it very clear to see: yes, we updated packages.

But what was updated.

OK, so here we updated kops and Teleport.

Here we updated helmfile and sops.

Here we updated Codefresh and Datadog.

And here.

It's gitleaks and helmfile.

It's just a coincidence that it's two per day, a really weird phenomenon.

Other times it could be dozens of packages, like this. One thing that would really help me is if you did the same thing for, like, your helmfiles repository.

Yes, I guess I totally want to do that.

I could just pin them. Like, when I want to use a helmfile,

I use master.

I could just pin it to the newest release.

But then I lose that context.

Why am I upgrading that specific file out of your repository.

So I prefer to pin it to the newest release of that helmfile, but actually finding that version number is kind of a pain in the butt.

I mean, I think you can eventually get down to a commit.

Ah, I see what you're saying.

What is the latest release related to that, right.

Yeah Yeah Yeah.

Well, or rather, the latest release that contains the newest Prometheus Operator, or the first release that contains the newest operator,

I should say.

So if you auto-label them when, like, a release is made,

hey, these are the changes.

Because otherwise you're relying on a human process

to remember to put, like, something in brackets, right.

Yeah, just click a label that says prometheus-operator and see all the times a release included changes to Prometheus Operator.

That allows me to a lot more quickly go through and say, OK, this was upgraded for this.

And this has the bug fix.

I need.

That gets you there.

Yeah, that's really good.

That's a good tip.

The only reason it's not here is I haven't gotten around to rolling it out yet, but I'm going to roll that out.

That's a good tip.

That's one of those things where I guess you just have to do something where you take the file names of the changed files and make those into labels somehow.

Yeah. So we've generalized the pattern, because

while we practice polyrepo across the board,

we do have a few monorepos for sure.

So we kind of generalized the pattern for that under packages; you'll see how we're doing it in the Makefile.

What we do is we have a target here named after the file.

One thing that's often forgotten these days, because make is used so much for phony targets, targets that don't correspond to files, is that make's origin is actually file manipulation, using file modification times to trigger build processes.

So here, this is saying if this file doesn't exist, then go ahead and generate it.

So that's a yes.

This is generating the labels for that by running this target here.

And we're just extracting; we iterate over all of the packages defined here.

One thing I just learned

in make is how you can define locally scoped, target-specific variables like this.

So I would say that's a powerful make trick,

and then we iterate over those, stripping the slash and appending a star-star glob.

So that's the outcome of running that target.
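The pattern being described boils down to a real (non-phony) file target plus a target-scoped variable; a simplified sketch, with the paths and the `vendor/` layout as illustrative assumptions rather than the exact Cloud Posse Makefile:

```make
# Real file target: only regenerated when .github/auto-label.yml does
# not exist (or is older than its prerequisites). The PACKAGES variable
# is scoped to this target only.
.github/auto-label.yml: PACKAGES = $(wildcard vendor/*)
.github/auto-label.yml:
	@for pkg in $(notdir $(PACKAGES)); do \
		printf '%s:\n- vendor/%s/**\n' "$$pkg" "$$pkg"; \
	done > $@

# Phony target that depends on the generated file, so running it
# refreshes the label config first.
readme: .github/auto-label.yml
	@echo "rebuilding README..."
.PHONY: readme
```

Because the label config is a file target, `make readme` regenerates it only when it is missing, which is exactly the modification-time behavior make was originally built for.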

OK, so then in here, under the readme deps target, we just trigger that as a dependency every time we run make.

And under the GitHub auto-label config here,

you can see the ones that we generated from that process.
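The generated config is essentially a mapping of label names to file globs. With the stock actions/labeler action, for example, the config looks like this, one entry per package; the package names below are illustrative:

```yaml
# .github/labeler.yml: PRs touching these paths get these labels.
kops:
  - vendor/kops/**
teleport:
  - vendor/teleport/**
helmfile:
  - vendor/helmfile/**
```

Since the entries are mechanical, they are a natural fit for code generation from the directory listing, as the Makefile target above does.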

So I'm just going to do code generation like that for the helmfiles, like you suggest; it doesn't look like that one's there yet.

Yeah. So a public service announcement, then: if you were relying on our packages to be stable because they weren't updated very frequently, which was a false promise of stability, that's no longer the case.

Now they are updated, well, almost within 24 hours of a new release. That is true for everything except for these packages here, where we're pinning to a specific major release, or sorry, a specific major-minor release; these here are not yet updated automatically.

But everything else that doesn't have a version number like this is updated automatically.

So these are all updated automatically, et cetera.

All right then let's see.

Any last thoughts here.

Just quick.

Go ahead.

Yeah, I was going to ask about Terraform 0.13, and how long until you guys move to that.

Does anybody know how long Terraform 0.12 is going to be around, I should say?

That's a good question.

Anybody have any Intel on that.

I don't know.

My understanding is any jump from 0.12 to 0.13 is going to be minor by comparison.

The 0.11 to 0.12 thing, that was, like, ugh.

Yeah, that was painful that's right.

And that despite it being a point release.

No, but Yeah.

But you know.

But semver pre-1.0 is a different construct then.

Yeah Yeah.

Yeah. So I know, because at Cloud Posse we exploit that as well.

So most of our work is all pre 1.0, which means that the interface is subject to change.

And it's kind of hard to argue that our stuff should not be pre-1.0 when Terraform itself is not 1.0 yet. But you should know that you elicited kind of a cold sweat when you said 0.13.

No, it was not fun updating all of our modules.

So all right.

So all right, everyone.

Looks like we've reached the end of the hour here.

This about wraps things up.

Thanks again for sharing.

I always learn a lot from these calls.

It keeps me on my tiptoes.

A recording of this call is going to be posted in office hours, in a little bit.

And I'll see you guys next week same time, same place.

All right.

Thanks Thanks.

Public “Office Hours” (2019-11-20)

Erik OstermanOffice Hours

Here's the recording from our “Office Hours” session on 2019-11-20.


Machine Generated Transcript

Welcome to Office hours.

It's November 20th.

Today my name's Eric Osterman.

I'm going to be leading the conversation.

I'm the CEO and founder of Cloud Posse. We're a DevOps accelerator: we help startups own their infrastructure in record time by building it for you and then showing you the ropes and how to do it.

For those of you who are new to the call the format of this is very informal.

My goal is to get your questions and answer them the best we can.

Feel free to unleash yourself at any time if you want to jump in and participate.

We host these calls every week, we'll automatically post a video of this recording to the office hours channel and I'll follow up with an email as well.

So you can share that around with your team.

If you want to share something that's private, just ask.

And we can temporarily suspend the recording and do that.

So with that said, let's kick this off.

So here are a couple of key points we can cover today.

First and foremost, though, I want to get your questions answered.

But if we encounter some silence, this is what we can discuss.

Namely, there's been some work by AWS, in partnership with HashiCorp, on Terraform support for landing zones.

Unfortunately, this is only an enterprise feature right now.

Mike, who joined us, has kindly kicked the tires of AWS Control Tower.

As for his experience — well, I'll let him share what that was like.

And then we also have a couple announcements: AWS announced managed node groups for EKS, and Terraform already supports it.

So we'll cover that, and time permitting, we can always go into GitHub Actions or Terraform Cloud, things like that.

All right.

So I'm going to hand this over.

Anybody have questions, first of all, that we should get answered?

Let me check what's going on in the Slack channel so I have it in front of me.

Office hours.

All right.

So no questions there.

All right.

Well, then Mike, do you want to talk about your experience with AWS Control Tower? Because this is the first I've really heard of it from a firsthand account of a user.

So I think others would be really interested.

First would you want to start with an intro of what it is.

Otherwise, I can do that.

Yeah — if you want to just do a general intro of what it is, especially with how you would see it, you know, in terms of working with the products that you guys have put together, then I'll actually demo it and give you guys kind of an overview of how it runs.

And then I'll probably curse professionally for a few months and then hopefully that'll close everything out.

All right, really cool. So for those of you that don't know Cloud Posse:

We do a lot of Terraform, and we also have a project on our GitHub called reference architectures. This is what we've been using to support our own consulting when we onboard a new customer: basically, lay out the AWS account foundation, and do it in this opinionated way where we use one GitHub repository per AWS account.

So basically, treating AWS accounts almost as an application in and of itself.

This process is not perfect right now.

Anybody who's tried it has probably run into some challenges because there's a lot of edge cases.

There's a lot of rough edges when you're working with AWS accounts. Namely, they're not really first class when it comes to automation; you can't do many things that you want to be able to do. Like, you'd want to be testing these things with CI/CD — you'd want to be able to destroy an account and bring it back up again.

Well, you can't destroy an account in Terraform unless you first log in with a whole bunch of click-ops and accept the terms and conditions, and all this other stuff.

So that's meant that we can't automate provisioning of AWS accounts to the extent that we would like.
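For context, the piece of this that can be automated today is account creation via AWS Organizations; it's the teardown that's missing. A minimal Terraform sketch (the account name and email below are placeholders, not from the call):

```hcl
# Hypothetical sketch of vending a member account with Terraform.
# The account name and email are placeholders.
resource "aws_organizations_account" "sandbox" {
  name  = "dev-sandbox"
  email = "aws+dev-sandbox@example.com" # every AWS account needs a unique email

  # Terraform can create the account, but destroying it only removes it from
  # state / the organization. Actually closing the account still requires
  # logging in as root and clicking through -- the click-ops described above.
}
```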

Well, good news is Amazon came out with a couple things.

One is this concept of landing zones, which are like AWS accounts that are pre-provisioned with certain configurations and settings — pretty turnkey. And the other is their Control Tower, which is basically a product designed to provision landing zones, as best as I understand it.

This sounds really cool on the surface, because hey, this is not something that we technically want to continue supporting. Like, if Amazon just made it easier to create AWS account architectures — like a vending machine — that would be awesome.

So I've been holding out hope that we could use this control tower.

Well, Mike here reached out to us because he was starting down the path at a new company, set to provision the reference architectures, and I said, hey, wait — before you do that:

Why don't you check out Control Tower?

And let me know what you think about it.

I thought I was doing him a favor.

So we'll see what he actually thought about that. At least he was willing to jump on the call.

So it goes to show I'm not totally pissed off that Erik led me down this route.

So I had a clean AWS account.

And I thought, you know what.

I needed.

I wanted to provision individual accounts for each of our engineers to have their own sandbox and keep it totally isolated from everybody else.

You know, it sounded like the ideal use case, you know, for using this.

So am I able.

Can I share my screen.

Eric Yeah.

Let me stop sharing.

Go ahead.

All right.

So let me see.

Just a reminder that we are recording this.

Make sure you don't have any secrets on that. — Yeah, I think I'm OK on that front.

So Yeah.

So, let's see — interesting perspective, the fingers on the keyboard.

You know, I don't know why they put the video camera down at the bottom, right at the base of the screen; it makes no sense.

So on it.

All right.

Are they gone? What do you guys see — are you seeing my screen?

I see your smug mug.

All right.

So — how can I get this to share? I thought I was sharing the screen; maybe I'm not.

There we go. Are you getting my screen now?

Yeah, I see it.

I see some.

I see the single sign-on with the AWS accounts listed below there, right.

So the first thing to keep in mind is, once you install Control Tower, it goes through and provisions the audit and log archive accounts automatically for you.

So, you know, one of the things that I actually posted in one of our Slack channels is: do we still need to go and increase our account limit?

And the answer right away is yes, because the minute you install Control Tower and provision it, it takes away two of your four accounts.

So you go through with a clean account.

And you can at most create two sub accounts.

So you've definitely got to start with an increase in your limit out of the chute to do anything worthwhile. — Just to add some context there for those who aren't running multi-account AWS: out of the box, basically, you sign up for AWS.

They give you four accounts.

And if you want to provision more.

And these days like we typically provisioned like seven to nine accounts out of the box.

Well, you've just got to open up a support ticket and say you need more accounts, and add a little justification of why. — It's kind of ironic, though, because they built this thing, Control Tower, for the perfect use case of, like, vending accounts, and AWS is crippled out of the box to even use it.

So kind of weird.

Yeah.

And another thing I found out which they don't document anywhere is there's like a magical number of 10 accounts that they could do quickly.

And if you want to go over 10, you've got to do a special justification for us.

So if you're experimenting if you ask for 10.

You can get that easy.

I asked for 15 and managed to get 15 although it took several back and forth.

OK So.

So it uses AWS Single Sign-On in order to log into all your accounts.

And so I'm signed on as our master account here.

And I don't understand how this whole thing works.

So if you guys will forgive me — it looks rough.

It is.

And I don't quite have a full understanding of it.

But let's say I want to provision.

So So this is you know it creates the master account an archive log archive and an audit account.

Now let's say I want to provision a new account.

So this is I need a new isolated account.

So I fire up the management console and it basically does the assume role within this as the administrator.

And so in order to actually go provision it a new account.

I came into this.

And I was like, OK.

Well, let's go to control tower because that's what I just installed right.

So we go into Control Tower, you muck around in here for a while, and you realize this is just where you can peruse all of your organizational units and things like that.

But it doesn't really give you.

This is where you can configure it and look at all the guardrails. Guardrails are their way of saying the sub-accounts that you create have these different restrictions placed on them. And they have some standard guardrails — you can't create public read access to log archives, and all this other stuff.

So you can really drill into all the restrictions that you place on these sub accounts.

But after a while, if you're trying to create a new account.

I finally realized, oh, I don't do it here.

I've got to go to a different Amazon product called the service catalog.

And so the service catalog is actually where you can launch this you know this product.

And so apparently they're setting this up so that you can create different portfolios with different types of products.

I'm just using the standard product.

And so if I want to create a new account.

I've actually got to go into the Service Catalog and actually launch this product.

And so I'm going to I'm going to walk through a hypothetical I won't create it at the end.

But let's say I want to create a test account.

And now we start to get where it's interesting.

So the single sign-on email — this is using their single sign-on corporate directory.

So I know I could use, you know, my test email.

Now, the next thing is when you're creating an account.

One of the things with Amazon accounts: every Amazon account has to have a unique email.

So you cannot reuse an email.

And if you do use an email that's already in use, it will go through the entire CloudFormation process of setting this up and then throw you an error saying, oh, that email is already in use.

OK, to further tell you how rough around the edges this is: if you hit your Amazon limit, it will go through and try to create your fifth account and just give you the very helpful message "internal error". And it took me 50 minutes of googling before somebody finally said, oh, yeah —

You need to increase your account limit.

So what I've been doing is just adding something extra to my email to create it.

Right. And so now you do have the nice feature here that you can reuse your same single sign-on.

You know, you can standardize on your single sign-on email, and have a different account email for each of the accounts that you're setting up.

So you can have, like, one single sign-on and three accounts by changing, you know, whatever suffix you like.

Let's not do this period.

So, you know, whatever my test account is — you can tag it, you know, as you would expect, and you want to have SNS topics to be able to get notified.

And then you can now launch this, and after about 15 minutes of chugging, it'll send an email to this address and say, here's how you log in.

So let me go ahead and log out of this.

And I'm going to go back and actually sign all the way out, and sign in just as me, as a user.

And so let's sign out.

And by the way, I have not gone in and set up my MFA for this.

So right now.

I am just doing this.

So when you sign into this new account.

It's not even the standard Amazon look and feel until you drill into your different accounts.

And so this is the account that I want to sign into as my admin — you know, as an administrator. And now I'm into my account.

So that's how control tower works.

They've instituted this kind of strange UI that is sort of confusing to manage, especially when you come at it from past experience of using AWS. In some ways, it is cleaner.

It's just that it's different.

So one question I have.

And I don't know if you know the answer.

But with the AWS accounts we provision — even, like, using Terraform or whatnot — you specify that email address, and each one of those accounts actually has a root user associated with it.

And if you don't go and do a password reset on that, whoever does can basically initialize it, especially if you're using a role email address.

They can do a password reset on it, set up MFA, and basically get you locked out of the root user at the master account level.

Have they done things to prevent that yet.

No no.

All this is really doing is managing your AWS Organizations for you.

Yeah. And so each of these accounts that you set up is a sub-account, and that's all it's essentially managing as part of that.

Yeah. All right — that is interesting.

So thank you for doing that.

Anything else as part of this you want to show, or...?

No, not really.

I just — it's, you know... it works, it just takes some putzing around.

I wonder now if it would have been better to go with Terraform. You know, for me, I think using Terraform and kind of the reference architecture might have been easier.

But this in the long run might have been better for other people to kind of step in and start to manage.

Right. Oh, you know, I'm almost of the opinion that this is — you know, it's good if you're not a Terraform user, but it is definitely not turnkey.

Yes, you've got to work through it and kind of figure it out, and it's definitely click-ops.

Yeah Yeah.

A couple questions — I attended a little bit late, but it looks like they stole that UI right from OneLogin, because that's OneLogin's idea of: here's a list of roles you have access to.

How does it know what roles to give users access to?

That's a good question. I ask that because I'm curious if you could step outside of their Control Tower and still provision your accounts and everything with Terraform. You'd make the role — maybe it's just a tag on a role or something — that gives it access, so that it'll show up in that menu for people. That's a good question.

I have not delved that far into it.

So OK, fair enough.

Yeah. And did you already have your master account set up with some level of single sign-on? — No, actually, no.

OK. I started with a fresh account, and it set up single sign-on and everything as part of installing the Control Tower.

Can it just authenticate against Active Directory or somewhere like that?

You can.

But when it set up single sign-on, it set up just an Amazon single sign-on.

OK, single sign-on setup.

Cool. Well — so then that's actually a nice segue into this.

It was posted in office hours earlier this week.

Just wanted to call it out to everyone else. This is interesting because, if you're in LA and you went to the West LA DevOps meetup, this was hinted at — that it was coming.

So a professional services team inside of AWS, together with HashiCorp, has been working on adding some support for landing zones. And landing zones are kind of like what Control Tower provisions.

So it's not the Control Tower piece — it's that sub-piece.

And apparently it's within the enterprise offering of Terraform Cloud — unfortunately, because it's integrating with Sentinel — that they have some support now for landing zones.

I haven't dug deep into this.

But if this is the stage that you're at.

It's kind of interesting.

Maybe just be aware of some of these things going on, because they could have long-reaching impacts on provisioning AWS accounts.

But what's exciting about this is that there's a general movement towards vending machines for AWS accounts. And one of the interesting core concepts that it brings up here is that you have your core set of accounts — these are the ones that you initially set up.

But then, throughout the course of your organization's life, you're going to be adding additional accounts — perhaps for developer sandboxes, or for individual apps that have different compliance requirements — and making that more turnkey is the idea here.

So Yeah, they talk about your core accounts and baselining the settings for those.

And then also baselining your security. Has anybody gone deeper on this than me, or has some interesting insights or things that they got out of it?

I just think it's a little bit funny that they chose the vending machine metaphor, for a number of reasons, but one thing that comes to mind is Steve Gibson on his Security Now podcast.

Vaguely familiar with that — definitely heard of it; I haven't been listening to him.

Oh, it's in its 13th year.

It's an institution in the DevOps community.

I mean, like, old school — older school, shall we say. Anyway, he's got a couple of episodes where he talks about the problem of securing the vending machine.

It's sort of one of those holy grails: figuring out how to have something out there that can dispense assets in a secure way.

But yeah — I guess I can see that. I'd also, like, challenge that, because through automation and repetition, that's how I think you achieve greater levels of security and predictability.

So therefore, with this vending machine model — if you can get it right, you can at least stamp that out.

But if you make a mistake in that template.

Yeah, then you've rubber-stamped that out to a dozen accounts, and you have that problem.

Somebody's commenting — oh no, that's unrelated; the sidebar conversation here is on the laptop with the camera in the lower corner of the screen.

You know, that would be the most exciting part of what I talked about.

I have a hard stop, just FYI, Erik.

So I'm OK.

No worries. If you've got to drop off, I totally understand — and thank you for stopping by, Mike.

Yeah, no worries.

All right.

So, other exciting news — whoops, I was jumping ahead of ourselves here.

Yeah, the exciting news is EKS managed node groups. As we all know, and have been frustrated by, Amazon's EKS was crippled from the start by only managing the masters.

Now that's no longer the case. Like with GKE, you can spin up fully managed node pools.

Now, it remains to be seen how rough it turns out to be in the beginning, and what kinds of issues people will have.

But at least we finally have it for EKS.

Yeah, I've been reading about this feature — I read about it yesterday.

I think it is kind of like Kubernetes, or just EKS, creating the instances for you.

But right now.

I think it doesn't support the spot stuff yet, basically.

Yeah, that's what I think too. In my setup, you know, I always use spot instances to do some non-critical stuff — like, it's not important, so it can go away.

But here, I think it doesn't support spot instances, yes — because there's no option in the CLI and no option in Terraform; I've checked the Terraform docs. By the way, I'm very surprised that Terraform actually acted very quickly on it.

Maybe it's the first time that they supported a feature right as it was released — they released it, what, not even more than 24 hours ago.

Yeah Yeah.

That's really cool.

I think that there's more and more collaboration, first of all, between AWS and HashiCorp.

So I would expect more of that happening.

Sorry — I'm having a desktop issue here.

Does anybody have any questions related to Terraform, Kubernetes, Helm, general DevOps, automation, release engineering, logging, SRE, Prometheus, Grafana — you name it?

One input from me: I think with Control Tower, there's still no option to activate or provision the MFA.

I mean, you can do it manually, but it's not something the Control Tower lets you do.

Is that true.

Because I looked into that.

Yeah, you can do it manually afterwards, but not with the product — not with the service they provide.

So you still need to do some manual stuff, maybe even with their own MFA setup.

But right.

Or for example, third party integration.

Like OK, whatever.

You cannot do that.

I think.

Yeah, that sounds interesting.

Anybody know about that.

We lost Mike.

So anyway.

Yeah, thanks for bringing that up, I'm not sure about that.

But you talked about using spot instances — and that they're not supported in the node pool... the node group?

Yeah, for the EKS node groups.

Yes, I think it's hot stuff, but it still doesn't support spot instances in the node group service.

What Yeah.

But I wonder if I.

That's a good question.

I wonder if Spotinst has already responded with something about it.

But I'm thinking the fundamental underlying technology doesn't support it yet.

Igor, I'm not sure, because this literally was announced like a day or so ago.

Yeah. Well, I always go to the Terraform resources, you know. I just came across that, and I wasn't able to find a directive saying spot instances, yes or no, or whatever.

So there's nothing — you only give the desired size, the minimum, and the maximum.

That's it.

Can you tell me, what is the difference between the node groups of EKS and, let's say, an auto scale group with a launch configuration set up?

So this is like GKE now, or Azure Kubernetes — you don't have to manage the node groups yourself.

That's right. That's the main difference.

So I think — maybe, Igor, I wonder if what you're asking is, or I should say, maybe what would answer your question is: when EKS manages a node group, it updates, like, the links to security groups and all of these other metadata things that go beyond just the auto scale group identity. Is that what you're asking, probably, right?

I still don't get it.

So — an auto scale group is, like, a list of machine types with some parameters around, you know, their properties.

But it does not include the access control lists and the security group memberships for a particular cluster.

Yeah, exactly.

You have to provision all the security groups.

You've got to provision the auto scale group, like this.

So this is our Terraform module — the EKS workers module — which is what we have been using to provision the node pools for EKS.

And then this lets us specify a few dozen different parameters for that.

Now, instead, there is a first-class primitive, aws_eks_node_group, that replaces our module, basically.

So instead of using this module to provision your workers, you can now provision this resource instead.

And that maps one-to-one to a resource managed by AWS, of course.

Now, it looks like there are a lot fewer configuration options here.

So it looks pretty basic by comparison.

If you want really fine grained controls, then it looks like you get a lot more of that when you're using the raw underlying technologies.

The building blocks like auto scale groups.
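As a rough sketch of the comparison (the cluster name, role ARN, and subnets below are placeholders, not from the call), the new resource boils down to roughly this:

```hcl
# Minimal sketch of the first-class EKS node group resource.
# All names, ARNs, and IDs are placeholders.
resource "aws_eks_node_group" "default" {
  cluster_name    = "example-cluster"
  node_group_name = "default"
  node_role_arn   = "arn:aws:iam::123456789012:role/example-node-role"
  subnet_ids      = ["subnet-aaaa1111", "subnet-bbbb2222"]

  # Scaling is about all you get to tune -- contrast that with the
  # dozens of parameters the workers module exposes.
  scaling_config {
    desired_size = 2
    min_size     = 1
    max_size     = 5
  }
}
```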

Is it true, Erik — or might it be the case — that all of this ultimately translates, or boils down, to some boto or boto3 primitives?

Well, boto is just building on the APIs that Amazon provides; boto is like Terraform, in the end.

I mean.

Oh, I see.

I see.

I thought boto was essentially — OK.

Yeah, boto is just a library for automation.

And then, you know, HashiCorp uses the AWS SDK to achieve something similar.

So yeah — related to that, though, what's interesting is, like, back in the day when they first announced EKS, there was discussion of EKS Fargate.

So, Fargate-style Kubernetes — there's been no mention that there's any Fargate relationship to this.

I'm guessing it's all classic EC2 instances under the hood.

Erik, I sent you a link — can you open it and share it on your screen?

Yeah — well, first of all, share the links in office hours on SweetOps.

That way everyone can see it.

Oh, sorry.

I'm sorry, wrong link.

All right.

So, Spotinst can support something similar.

Probably it would be good, like, for me to make a demo about it eventually.

Mm-hmm.

It's not the same as node group.

But probably covers a part of its functionality.

So first of all, from the UI it might have integration with Amazon EKS, but from the Terraform point of view, it has some resources. I'm not sure how it interacts with EKS from the UI point of view.

But if you will look here.

So it creates a Spotinst Ocean on AWS, which means you can set a whitelist — a list of instance types that can be started as part of the scale group, or something like that — and put the Ocean in specific subnets that you can specify. — I think a little bit more context is due, though.

So sorry guys.

We've been doing a lot with Spotinst in the last two weeks, so this all makes sense to us.

I just wanted to give a quick background, though, on what Spotinst is. Many of you, I'm sure, are very familiar with the concept of spot instances on AWS: basically, preemptible instances on a marketplace where you can bid on compute.

And so long as your bid is at the market rate, you get those compute resources, but they can be terminated at any time unless you reserve them for some window of time, from one hour up to six hours. Now, with spot fleets, Amazon has made it easier to manage pools of spot instances.

However, when you look at the fully managed service of Spotinst, it makes spot fleets and all that stuff look like child's play. Spotinst does a very good job of making it clear how they're saving you money and how you can save more money, giving you visibility into your Kubernetes clusters and what's costing you, and helping you optimize that stuff.

Now, I'm not ready for a demo on Spotinst today.

But we will have one in a future office hours session, probably led by Igor here, who's been doing a lot of our work on this.

So yeah, that's just an intro into that.

Now, what Igor is talking about right here is the Terraform provider for Spotinst — the managed service — to provision all that stuff as code.

Yeah. So the general idea is that you integrate Spotinst with your AWS account and Kubernetes cluster, and then it begins to manage the creation of nodes and adding them into your Kubernetes cluster to get the lowest price.

Or, like, the best availability with the lowest price.

It uses spot instances when the spot price is lower than on-demand, and if the spot price goes up, then in the worst case you will have the on-demand price.

And so that can be very useful for the case that you describe.
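For a rough idea of what that looks like as code, here's a partial sketch of the Ocean resource from the Spotinst Terraform provider; the values are placeholders, and several required arguments (image, security groups, instance profile, and the like) are omitted for brevity:

```hcl
# Partial sketch of a Spotinst Ocean cluster; values are placeholders.
resource "spotinst_ocean_aws" "this" {
  name       = "example-ocean"
  region     = "us-east-1"
  subnet_ids = ["subnet-aaaa1111", "subnet-bbbb2222"]

  # Whitelist of instance types Ocean is allowed to launch.
  whitelist = ["m5.large", "m5.xlarge", "c5.xlarge"]

  # Fall back to on-demand when spot capacity dries up, so the
  # worst case is paying the regular on-demand price.
  fallback_to_ondemand = true
}
```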

Some of the other cool things Spotinst will do: it knows the probability that instances will be terminated, and it can preemptively start cordoning nodes, draining them, and moving pods to other nodes, so you have better availability.

And also, it deploys a service inside of Kubernetes, so it knows how many cores are requested and how much memory is requested.

So it can actually right size your instances on the fly.

So that you're not over provisioning machine types.

Plus, using, like, the concept of fleets, it can deploy lots of different kinds of EC2 instances with different bids.

So the chance that you lose your entire fleet at the same time is very low.

And then there's a lot more behind the scenes, I think, that goes into what this product can do.

Do you need to buy it — do you need to buy that separately? Is it like Wasabi, or can you actually just do it through your straight, regular AWS?

It's a separate SaaS. You sign up for it — you go to the Spotinst site — and basically it's free for, what is it, up to 20 nodes.

I think.

And then after that you pay, and it charges you basically 20% of the savings.

So depending on how you look at it, it's basically free as a product.

My god that's brilliant.

Are they hiring? Pretty brilliant model — I wish I came up with it.

Yeah, I wish it was my product.

Right. Let's see what else — we've got about another 10, 15 minutes here to answer any questions, if anybody has joined and has questions related to Terraform or Kubernetes.

Helm — Helm 3 is out. Anybody using Helm 3? By the way, I should have had that on here.

How is that going? You're muted, by the way.

I think it works.

It works great for me.

I was using Helm 2 with Tiller before — so, a local Tiller.

So it didn't change much.

Yeah. Did you migrate existing workloads to Helm 3?

Not really.

I just tested some stuff in it.

So I just tested it on a new test cluster — it just works.

OK this stuff works.

I read it.

Did you see the update — the update for the Helm notifier?

Yeah, OK.

I was just pulling it up here a little bit.

Hey, guys.

Peer here is working on an open source product that's really cool.

It's called the Helm notifier.

If you're using Kubernetes and Helm, and you haven't checked this out —

It's a great way to know what is happening.

So if you look at any chart, you can compare multiple versions of that chart and see the changes you'd have to take.

Yeah — now you can compare.

OK, so the UI has changed a little bit since I looked at it last, but here's what's pretty cool.

So say you're currently running Grafana version 3, to the left, and you want to upgrade to Grafana version 4.0 — what's going to change?

Like, it's totally opaque today.

Well, not with the Helm notifier — with it, you can see all the changes. In this case quite a lot, because I jumped major versions here to exaggerate, like, the changes.

But here's everything that would happen if you went through that change.

So let's think about the ecosystem.

I love it.

I think it's amazing.

Yeah, it works.

It got a search function back — I rewrote the front end in Vue.js, so it's new.

Before, you somehow had to use the compare function to find things, which was kind of weird.

So I will just find my go-to test case.

So we'll have to check where that is.

I just want it.

I have a question.

I guess.

So let's see — what's a diplomatic way of saying this.

Let's just say that there are a lot of things that our product team, I think, overlooks and neglects.

And so I'm kind of running a little bit of a shadow development organization within the company.

One of the things is: we have a Helm chart to install our collector, and I wanted to basically clean it up a little bit and kind of make, like, an exemplary Helm chart — you know, kind of like how Anton Babenko's Terraform stuff is always, like, pristine and fabulous.

I want to be like him.

Kind of like — yeah, there's room for more excellence, always.

I see what you guys are doing.

Where does one go to learn how to, like, build beautiful Helm charts?

I think it's hard to say anything in the chart realm of Helm is beautiful.

Yeah, I would like to do a shameless plug for our monochart, but I would say that the only thing beautiful about monochart is that it has proven wonderful for us in our consulting to use.

So we don't write as many charts. But if you look behind the scenes of monochart, monochart is a master class in how to use Helm.

But it's ugly as sin, because it's Helm.

So let's look.

Yeah, I can just show you guys quickly what I'm talking about there.

Yeah Yeah.

If you go to cloudposse/charts, under incubator, we have this thing called monochart. And the reason monochart exists is we just discovered that there's very seldom justification for developing a new chart for most apps, so long as they follow a pretty standard interface, right?

So instead, we think about charts as interfaces — and monochart is like one interface to rule them all.

So here's an example of how it can be used in our values file here.

So for example define a Docker config with these settings.

This is going to be the image for our chart.

Here's a config map — it supports all the best practices, like supporting annotations.

It also supports mounting your config maps to the file system.

It also supports defining environment variables and files.

So it supports the three most common ways that config maps are used in Helm.

And it repeats all that for secrets, and adds support for inline environment variables.

And then we support the common primitives of deployments and daemon sets et cetera.

So it's kind of like a dumbed-down description of Kubernetes resources, the way they're most frequently used within Helm.
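As a rough illustration of that interface idea, a monochart-style values file might look something like this — the key names here are from memory and illustrative only, so treat things like `configMaps.default` as assumptions and check the chart's own values.yaml:

```yaml
# Hypothetical monochart-style values.yaml -- keys are illustrative,
# not guaranteed to match the published chart.
image:
  repository: example-org/example-app   # assumed image name
  tag: "1.2.3"
  pullPolicy: IfNotPresent

deployment:
  enabled: true
  replicaCount: 2

configMaps:
  default:
    enabled: true
    annotations:
      example.com/managed-by: monochart
    env:                      # rendered as environment variables
      LOG_LEVEL: info
    files:                    # rendered as files, mountable as a volume
      app.conf: |
        listen 8080

secrets:
  default:
    enabled: true
    env:
      DB_PASSWORD: changeme   # placeholder only
```

The point is that every microservice supplies only a values file like this, and the one shared chart renders the deployment, config maps, and secrets from it.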

But now dig down into the templates and look at how this looks behind the scenes — the templatization of all this stuff.

Yeah, it gets pretty gnarly and you know this is ugly.

Like I said ugly as sin to templatize all of that stuff.

But it is wonderful, because I think, in a way,

part of the problem is Helm:

there are little decisions that humans need to make that they're not generally well prepared to make.

And so if you could automate those decisions.

Exactly right.

You know.

So this is what I totally advocate that companies do.

So if your company is developing microservices and you have like you know, 15 different microservices and you're advocating developing 15 different charts for those services.

It really begs the question: is that necessary?

Because detecting the differences between all those charts is very difficult.

So we use monochart all over the place to deploy services in our own stuff.

So in Cloud Posse's helmfiles — I'm sorry.

Go on.

If I send you a link.

Well, it was an example.

Oh cool cool.

You sent a letter home.

Nice, so yeah.

Igor in the office-hours channel posted a link on how you can use monochart to deploy services on Kubernetes.

But here, just a quick Google —

well, Google's now synonymous with searching.

I'm not even on Google here.

I'm on GitHub.

Just a quick search on GitHub shows you all the different places that we've used monochart in place of developing an original chart.

Basically you're just passing values down, and monochart has all the configuration.

OK, that's cool.

Yeah, exactly.

And in the extreme case if you guys haven't heard about it.

There's this thing called the raw chart, and the raw chart for Helm is the ultimate escape hatch.

It just lets you embed full-on Kubernetes resources as values.

So now you can use Helm as a package manager

for all these other raw resources that you want to deploy.

This is like when maybe a vendor gives you some raw resources, but you use Helm and you want to manage them that way.

So the raw chart is for that.

So monochart is somewhat like the raw chart.

It's just that it's a whole lot more opinionated.
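To make that concrete, here's a sketch of pulling in the incubator raw chart as a dependency and feeding it full Kubernetes manifests through values. The chart version, repository URL, and value layout are assumptions — verify them against the raw chart's own docs:

```yaml
# Chart.yaml of a thin wrapper chart (Helm 3 dependency syntax)
apiVersion: v2
name: vendor-resources
version: 0.1.0
dependencies:
  - name: raw
    version: "0.2.3"                          # example version
    repository: https://charts.helm.sh/incubator

---
# values.yaml -- raw manifests passed straight through to the raw chart
raw:
  resources:
    - apiVersion: v1
      kind: ConfigMap
      metadata:
        name: vendor-config
      data:
        feature-flag: "true"
```

Anything listed under `resources` gets rendered as-is, so Helm is doing release tracking and nothing else.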

What I can say about how to do charts in general — when I teach Helm, I talk about it all the time: use as little templating as possible.

It gets super confusing if you have to maintain it.

I mean, monochart is a nice idea.

But in general.

I mean, we rarely use a lot of templating, so we just do really, really basic stuff.

If you're doing something little drips.

You're doing something else.

This module.

Well, yeah.

Maybe along the lines of raw, almost — or you could use raw.

You could use raw as your — what do you call it — your chart dependency.

And then pass your values to it, right?

Yeah Yeah, maybe.

I mean, one of those templates has eight lines.

So it's super, super basic.

That's what we try to achieve. Have you tried Jsonnet? I have a take on it.

OK, so there are tools that take a different approach, and one of the things they came in to stop doing is templating YAML — as I just described, what Helm is doing, templating YAML, is not really cool — and they suggest using Jsonnet as a special language for templating JSON.

JSON, I guess so.

And it's not a surprise — YAML is a superset of JSON, so the serialization is the same.

And as I understand it, Helm 3 starts supporting different template languages — initially they're supporting Lua — and there was discussion back and forth

about whether they should support Jsonnet. I'm on the fence: Jsonnet is leaps and bounds nicer to look at than, say, CloudFormation, but I still don't like it as a developer.

Maybe it's going to take me a while to warm up to it.

So I do kind of like the resolution to use Lua over Jsonnet in the end.

So I'm not sure that monochart written in Jsonnet would look better than in YAML.

Yeah or.

Yeah Yeah.

I don't know.

I don't know.

I think where the power might come in is the ability to create libraries that generate common kinds of functionality behind the scenes.

OK, has anybody tried the Lua stuff with Helm 3?

I mean, is it looking at here now here.

So I say to yourself say that one more time.

I mean dirty is tough sell off on Wall st.

It's all said I couldn't stand it.

That's fine.

No All right.

So yeah, this is old.

I'm not going to bother sharing.

Now All right.

Well, then that pretty much brings us to the end of today.

Thanks, everyone, for coming out and sharing your experiences.

Mike Rowe for sharing your experiences with control tower.

For next week —

I think Igor is going to join us. Were you saying something?

No, no, no, no.

So Igor, next week,

I think, is going to be talking about the ThoughtWorks Technology Radar.

I actually think we're going to be picking a few more things off that ThoughtWorks Technology Radar.

It's really cool.

They do a pretty good summary of the technologies out there and what you should keep an eye on.

So anyways thanks everyone.

Talk to you next week. same place same time.

Bye Bye bye.

Public “Office Hours” (2019-11-13)

Erik OstermanOffice Hours

Here's the recording from our “Office Hours” session on 2019-11-13.


Machine Generated Transcript

Then with that, we will kick this off. Today is the 13th of November; we're doing office hours here for SweetOps.

Unlike other times we actually have an agenda some talking points.

We don't have to cover all of these things today.

But these are just to keep the conversation flowing. Obviously, the number one priority here is for us to answer your questions, whatever those may be.

But here are some things that I just wanted to talk about.

So the first thing I'll cover is some SweetOps news.

If you're in our Slack team, you'll have seen some activity there.

Basically what I did was I renamed the general channel to announcements because on a free team that is the only channel, you can restrict posting on.

And it's the only channel you cannot leave.

And then I created a new general channel invited everyone there.

So that's where we can now have conversations related to everything else about DevOps.

It doesn't have to be topical like all of our other channels are.

It's also a good place to ask questions like, if you don't know where to ask a question.

All right.

So with that, I'm going to turn the mic over.

Open the floor up.

Anybody have questions, problems that they're dealing with Terraform Kubernetes helm.

Interesting news.

You've seen that you want to share.

Do go.

Go ahead, unmute yourself and talk.

Did you also change Geekbot?

Because I notice it changed the logo on it.

At least — are you paying for it?

You know.

So I stopped using Geekbot before — I became like a master of Zapier.

And now it's just easier for me to use Zapier than it is to use Geekbot.

But look, they have a good product.

And you know I want to support them.

But it doesn't do what I want it to do. The reason I'm asking is that we're actually copying that, and we started using Geekbot in the company.

Oh — and when I saw that you actually started using your own logo, I said to myself, OK, we might need to pay for it, because this kind of looks cheap on a company.

I hate the Geekbot logo, period, to be honest. I really do.

I don't mind.

I don't mind it.

We're actually thinking of naming our next one

after one of the guys that's leaving the company now.

Yeah And a match to him because we want to start using Fogel as well.

But the jury's still out on whether we develop our own bot or we start using theirs, because it's about $5 a month.

So it's not that.

So it's not for focal heavy met.

I think it's Vlad had you rejected.

Not yet.

So it's Vlad Sholsberg —

I think he's one of the founders, and he's in SweetOps.

You can reach out to him if you have any questions.

Awesome Thanks.

Yeah well.

Any other questions.

Oh great shot.

I'm sad that the logo doesn't show that well in dark mode.

Oh, that was a consideration.

I just want to say.

And I was running dark mode for a little while.

And I still do it on my Mac to be cool.

But to be honest, for a lot of things,

I just find myself constantly squinting.

So I've gone light mode for Slack.

We also implemented AWS SSO today with the Active Directory connector in our billing account.

So we're actually having to redesign all of our policies and groups —

IAM groups and roles.

Fred yes.

So here's a guy that actually did it.

Brave — dealing with all that IAM.

Yeah Yeah.

Very, very lonesome work because you did it.

You did though all by himself.

So the process of that, which is not bad.

But, you know, when I talk about this stuff —

and I realize this is a little bit extreme for some —

IAM is great for managing your services, but you should almost never need it for humans, because everything you do is going through your GitOps workflow, and you're actually using the governance model of your source control, and how changes are applied, to regulate what actually happens.

And that eliminates a whole class of problems: managing complex IAM roles, coming up with a matrix of groups,

and the IAM permissions and policies that you need.

Right — the thing is that even though we want to eventually migrate the whole company to that structure, based on using Terraform and either Atlantis or Geodesic as we use it —

Right now a huge chunk of the company

is a monolithic application in a single AWS account.

So until we migrate that monolithic account and slice it up, we will have to do this.

I we will have to do this.

And besides, some of the developers still find it useful

to be able to access the AWS console and look at logs and things like that.

So speaking of which, did you see the post — I forget who shared it — in the AWS channel today?

It's really cool.

It's how to detect changes made outside your normal workflow.

Like it.

Yes — it detects console requests and alerts on those, which is kind of cool.

I think Lauren Lauren like money many elections.

Yeah, I had a meeting with some of the guys from AWS, and there's not much I can divulge,

but be on the lookout at re:Invent, which is coming up, because there are a lot of things coming that we will all be very happy to use.

Basically, if you interact with a lot of VPN as for providers and other people.

Yeah — are you going to re:Invent? Not myself.

OK. One of the other managers is going; if he drops out, I will be going.

And I might also be absent from work and appearing in the event anyway.

I don't know really.

I don't know yet.

But I am not yet booked but I do.

I do want to go.

If I can't.

We just our son was born a couple of months ago.

And my wife will kill me if I am gone on my own.

Is how the other two bonds two months.

Oh no.

The other cool thing I got.

And the other cool thing I've seen is: you somehow register all your resources that have been provisioned using your blessed method — whatever that is, whether it's tags or some other method like Terraform state.

And you have a system that keeps checking every five seconds or whatever.

And just automatically deleting right.

Anything that's not.

I don't think.

I don't know anything about that.

You know, AWS Config — and then you can have policies that enforce that, which is really neat.

When you put it together with the CIS benchmarks.

Yes, the CIS benchmark.

So somebody brought up Prowler, which does something similar to this.

This is similar, yeah.

The CIS security benchmarks —

it's implemented with CloudFormation, and you can just deploy it to your account, and it sets up all the continuous monitoring of your rules and stuff.

That's why we have a Terraform module for CloudFormation —

so we could deploy that.

Yeah, I'll share these. Is that kind of what you were thinking about, Andrew?

I don't know the underlying technology.

Oh, actually enforcement of the Yeah.

You can't.

I don't think you can do enforcement of it.

Yes, I can take.

I think AWS Config will only show you how compliant you are against your policies.

Not sure if it prevents.

Oh no, his was — his was active.

It was: if you create something in the console — you know, like this thing that somebody posted earlier where it shot out a Slack message or whatever.

Instead of shooting out a Slack message, it would actually just delete the resource.

OK, well, this is a third.

This is a software or open source that he's deployed for that.

I don't know.

I doubt it.

It was for a large financial institution.

So with AWS Service Catalog, you can define the services that are permitted for your AWS accounts, and anything that goes outside of that is not possible.

But it was too.

It was to enforce you know OK.

We have a policy that everything must be created with Terraform or whatever.

Yeah — you know, using our corporate workflow pipeline or whatever.

Yeah And so you had to have you had to have your.

They called it the, like, break-glass credentials, where you could get in there and fix shit if things went wrong, you know.

But, you know, it had this thing running all the time going: did this get created using the blessed process?

No? Kill it.

That's really.

Yeah, that's really cool. Could you ask your friend

what they're doing for that?

Yeah Yeah, I think jumps out at me.

I don't have anything for that.

Anybody else seen anything cool related to that policy enforcement right.

Well, let's see here.

So going looking through the agenda here.

Big guy.

Yeah quick, quick thing.

Those Kubernetes users who have been cursing at Helm for the last few years about Tiller and the security implications of that can finally relax.

Helm 3 has been officially released.

I believe helmfile now does support it.

But we haven't we haven't yet.

Kick the tires on it.

So I will confess to that. Anybody using Helm 3 yet? Because the beta has been out for a while.

No, I'm still on, like, Helm 2-dot-forever-ago, because my company is slow.

I was looking at it, I was looking for something to give a talk on.

And I think you just gave me something.

It's a good excuse to get me exploring Helm 3.

Yeah, I believe that's it.

I just posted it.

Oh cloud custodian that does ring a bell.

Let's see here.

No, I haven't started.

OK, cool.

OK So since they open source it and everything I can tell you.

He worked at Capital One and they were doing it at Capital One.

Capital One was using cloud custodian.

So Yeah, they go.

They open source.

That's on a lot of talk about it.

That's pretty legit.

Very cool.

Thanks for sharing.

I'm going to post them in the SweetOps office-hours channel. By the way, if you guys haven't joined the office-hours Slack channel in SweetOps yet,

that's where we're sharing all these links.

I have a question.

Do any of you use a self-service password application that you might want to recommend, related to this Active Directory single sign-on that we're trying to implement right now?

I currently have a PHP application that allows me to reset passwords on Active Directory, but I'd rather not use anything like PHP if I can avoid it.

Yeah, so we haven't used it for this use case yet.

Keycloak is pretty awesome.

I don't know if Keycloak supports password resets on Active Directory.

It's closed.

Let's see here.

Yeah, looks like there's a solution for this.

So I mean, it seems like using Keycloak would be a pretty legit option.

So, you know, we use Keycloak for our IdP gateway.

So Keycloak is by Red Hat.

It's in Java.

It supports every IdP imaginable.

So if like I go to Portal here and log in.

You see I'm presented with the login screen.

This here is for Active Directory or whatever custom back end, you have.

And this here is for single sign on and you can integrate it with any number of a single sign on provider.

So what's cool here: a customer manages their end, and we manage ours.

And do we have to have our own server for it?

Or is it hosted?

So yes — Keycloak, as I said, is open source; you run it yourself.

If you wanted a managed service, what would you use? Yeah.

What have you thought about Okta or Auth0?

The only reason for using Auth0 was that it's made by a guy from Argentina.

That's where we're from.

Oh cool.

Yeah — oh, Auth0 comes highly recommended.

I think it's the 800-pound gorilla in the space, you know.

So Keycloak comes into play also from an economics perspective.

You know.

Especially since Auth0 gets very pricey; Keycloak you self-host and self-manage.

You're increasing the attack surface, obviously, by using open source,

off-the-shelf stuff.

But there hasn't been a critical CVE for some time.

And it is supported and maintained by Red Hat.

OK Thanks I have to get you released to have regular people looking for it.

Well, for what it's worth — you know, like everything else — under cloudposse/helmfiles we also have our distribution of how we install Keycloak.

So this is a way to get started faster.

So what's cool with using Keycloak as well is you can use it together with Gatekeeper, and Gatekeeper becomes your identity-aware proxy, and then you can secure all your apps behind that. How have you felt about Gatekeeper?

Last time I looked at it, it felt like a dumpster fire, honestly.

I don't know.

I mean, we haven't had any problems with it.

It was, as with everything.

Open source.

The challenge is initially getting everything working because documentation is out of date or nonexistent or conflicting.

So once you find that once you find a working recipe for it.

What's really well.

And we can just.

So this is all behind Gatekeeper right now; it's all protected by my single sign-on.

And that includes the Kubernetes dashboard.

And this is using my role with Gatekeeper to authenticate through Keycloak to the Kubernetes dashboard, and the role bindings there map my IdP role to Kubernetes.
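For reference, a keycloak-gatekeeper deployment is mostly configuration like the following. The realm, hostnames, and role names here are made up; the kebab-case keys reflect the gatekeeper config-file format as I recall it, so verify against the project's docs:

```yaml
# Sketch of a keycloak-gatekeeper config file -- all values are placeholders.
discovery-url: https://keycloak.example.com/auth/realms/example-realm
client-id: kubernetes-dashboard
client-secret: <client-secret>
listen: 0.0.0.0:3000
upstream-url: http://kubernetes-dashboard.kube-system.svc.cluster.local
redirection-url: https://dashboard.example.com
enable-refresh-tokens: true
resources:
  - uri: /*
    roles:
      - ops          # IdP role required to reach the upstream
```

The proxy sits in front of the app, handles the OIDC dance with Keycloak, and only forwards requests from users who carry the required role.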

Yeah, we're seeing one of those fun drop-downs.

Really good.

What was that? One of those fun drop-downs that I don't have.

Oh, this is the latest version.

So we just updated Keycloak — sorry, folks, rather Forecastle — to the latest release, and then you can associate metadata with the services in the portal.

What's also cool about the latest release of Forecastle is you can now have a CRD — this whole menu is managed using CRDs for Forecastle, and that's cool.

That matters if you're using things like Istio and virtual services or virtual gateways, and you can't use the ingress annotations that Forecastle is designed to use.

So now you just deploy the ForecastleApp CRD and you can add anything you want to the menu here.

Which is cool.

Also for like external services and things like that.
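A ForecastleApp custom resource for an external service looks roughly like this — the apiVersion and field names are from memory, so double-check them against the Forecastle repo:

```yaml
# Hypothetical ForecastleApp entry -- all values are examples.
apiVersion: forecastle.stakater.com/v1alpha1
kind: ForecastleApp
metadata:
  name: pagerduty
spec:
  name: PagerDuty
  group: External Services
  icon: https://example.com/icons/pagerduty.png
  url: https://example.pagerduty.com
```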

So clicking on this takes me out to PagerDuty.

Any other questions related to that.

That was a rabbit hole there.

I have a question about this and that guys.

My name is Rocco.

I'm from Romania.

Thanks — it's my first time on this.

Yeah, good to have you.

Thanks. I'm now implementing — what I was working on is on Amazon EKS.

Yes, and we'll go for that.

We go production later next year, and we're using the eu-north-1 region in Stockholm, now that EKS is available there.

That is available.

Yeah — but I'm not able to have multi-AZ on the cluster.

And the reason I need multi-AZ, at least for now, is only to get the logs shipped as quickly as possible, because we have logs from nginx,

some from application logs, and we are sending the logs depending on which application,

and assembling them in the Elasticsearch index.

Gotcha OK.

So it sounds like you're not using Kubernetes.

First of all.

Yeah not OK.

It's supposed to be OK — because if you were using Kubernetes, then what I would just say is use one of the fluentd exporters.

We have the exporters to, like, Elasticsearch.

We have one to Datadog, we have one to Splunk — sorry, Blaise, we don't have one to Sumo Logic.

But yeah.

So using fluentd is basically what you're going to want for your logs, right — to stream those to one of those back ends.

You want to get those off the servers.

You shouldn't care about the servers.

So my approach.

Now for a quick solution was OK.

Just go with the worker nodes in a single AZ, so I can create a persistent volume, mount it in every pod, and have a sidecar container for the logs.

I already have log rotation configured for the logs.

So just read the logs and send them to Elasticsearch on a regular schedule.

Yeah That works as well.

I mean so.

So when we say nginx — are you talking nginx ingress, or are you deploying nginx separately inside Kubernetes?

No, it's simply the ingress.

Well, OK.

So you do have another option, though: you can configure nginx to log to stdout.

And then you just use the whole backplane of Kubernetes logging, and then you can use the fluentd stuff, et cetera.

Yeah, but I had a problem with that: it was putting out a real entry spread over multiple lines, not one per line.

So that was my biggest problem when I did the test with sending everything to stdout.

Yeah, I hear you.

I know what you mean and I've seen that sometimes happen.

So if that's a deal breaker maybe it gets complicated.

The other thing, though, is fluentd — Logstash, you know, used to be the leader.

It seems like everything's moved over to fluentd, so it's now the EFK stack instead of the ELK stack.

Yeah, and I'm not sure if fluentd does a better job of handling multi-line, joining those entries.

I know you can write custom filters and we've had to do that for customer applications that will join those lines.
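For the multi-line case, the usual approach is something like the fluent-plugin-concat filter, which glues continuation lines back onto the entry they belong to. The tag and the start-of-entry regex below are placeholders for whatever your log lines actually start with:

```
# Sketch of a fluentd concat filter -- assumes fluent-plugin-concat is installed.
<filter kubernetes.**>
  @type concat
  key log
  # Treat any line starting with a date as the start of a new entry;
  # everything else is appended to the previous entry.
  multiline_start_regexp /^\d{4}-\d{2}-\d{2}/
</filter>
```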

But OK.

But what you described.

It just sounds like more work.

But if you have a working.

Hey, you know, that's the hardest part right there.

So have you fixed it yet?

Not for now.

I will go with this solution and see in the future how to change it.

The biggest problem is I also need to do a lot of standardization on the logs before sending them to Elasticsearch.

Yeah Yeah.

So we did that as well.

I don't know if I have an easy open-source example to pull up here, but yeah — basically what we wanted to do is take structured log data generated by custom applications, specifically written in Rails using the standard logger there.

So its key value pairs.

And we wanted to cast that into JSON and structured data for Elasticsearch so it's easily filtered.

And that was pretty straightforward to achieve with fluentd.
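That kind of key=value-to-structured-data step can be sketched as a fluentd parser filter. Here I'm assuming the third-party logfmt parser plugin, since key=value ("logfmt") isn't one of fluentd's built-in parser types — the tag and plugin choice are assumptions, not necessarily what was actually deployed:

```
# Sketch: re-parse the "log" field of Rails logfmt output into structured
# fields that Elasticsearch can filter on.
<filter app.rails.**>
  @type parser
  key_name log
  reserve_data true     # keep the original fields alongside the parsed ones
  <parse>
    @type logfmt        # provided by fluent-plugin-logfmt (assumed)
  </parse>
</filter>
```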

OK looking great.

Yeah Oh, yeah.

What would you recommend for monitoring

a Kubernetes cluster,

and storing the data for a long time —

like the data from the last month?

Yeah Yeah.

So definitely a few options there.

If we're talking open source, then the de facto answer is pretty much Prometheus and Alertmanager, and using all the exporters in that ecosystem.

And if you are using that then how you manage Prometheus is a bigger question right.

So Prometheus itself is a memory hog, and when you query it, that could take down your cluster.

So basically, the reference architecture for Prometheus is to have a hierarchy: each cluster runs its own Prometheus but keeps a very short retention period.

And then you have a centralized Prometheus, which aggregates the Prometheus instances of all the other clusters.

Now, if you require, like, extreme precision — you're going to lose some precision with this.

But for most people and most monitoring, that doesn't matter.

And then this is going to get you what you need for long-term retention with Prometheus.
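That hierarchy is what Prometheus federation is for: the central Prometheus scrapes each cluster's Prometheus on its /federate endpoint. A minimal scrape config looks roughly like this, where the match expression and target hostnames are examples only:

```yaml
# Central Prometheus: federate from per-cluster Prometheus instances.
scrape_configs:
  - job_name: federate
    honor_labels: true          # keep labels assigned by the leaf servers
    metrics_path: /federate
    params:
      "match[]":
        - '{job=~".+"}'         # example: pull all job series; narrow in practice
    static_configs:
      - targets:
          - prometheus.cluster-a.example.com:9090
          - prometheus.cluster-b.example.com:9090
```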

Now, there are other considerations.

There's Thanos, which is an open-source long-term data store for Prometheus.

We don't have firsthand experience deploying it yet.

So I can't tell you how painful or non painful that is.

But that seems like the way to go.

If you need to have tremendous scale for your metrics.

But can I say no glass or some data from September yes or so.

I'm not quite sure what you're referring to.

So basically, Kubernetes has metrics APIs — you've got Heapster.

You have all these services that export data in Prometheus format, and then what Prometheus does is it scrapes that.

So Prometheus is running on it just on a certain number of nodes in the cluster.

And then you have Heapster — Heapster is one of the products that gets metrics out of the cluster, out of Kubernetes, and how the scheduling is doing.

And then you send that to Prometheus and the Prometheus sends that somewhere.

So where does Prometheus store its data.

So Prometheus supports a whole bunch of pluggable back ends for this.

OK Yeah.

And the default — or the fastest one

to get up and running — is just the local file system.

But then that causes problems right.

If you want HA

and failover.

So to be honest what we do for smaller installations with Prometheus is we're using EFS and that seems to be working pretty good right now.

We don't know yet when that's going to fall over.

But we have a few things in our toolbox to deal with that.

One is, you know, provisioned IOPS.

So the most likely reason for having problems with running Prometheus on EFS is going to be problems with throughput.

And we haven't seen it yet.

So we can dial that throughput way up before we invest in coming up with a more sophisticated back.

I'm glad to hear you say that, because we're doing the same thing.

Nice We were we.

Everybody goes.

Don't use, you know, NFS — which is what EFS is, right.

Don't use EFS for your Postgres databases, don't use NFS for your GitLab, you know, persistent storage.

Don't use EFS for this and that and the other thing.

And we're going well, we're small.

And we had the same thought.

It's like, well, when it falls over we'll figure something else out you know because we're still in like beta.

And it hasn't fallen over.

And we're not small anymore.

It's a good point.

And it's a testament to just how performant and reliable EFS is.

Like, we were running Airflow — Airflow expects to have a shared file system — and, well, Airflow started falling over, and guess whose fault it was.

It was EFS — our EFS file system just wasn't big enough.

But here's the hero story of this all.

Like, we could have spent hundreds of hours on our customer's tab to fix this with some more elegant, engineered solution —

or, for $300 extra a month, we could bump up the IOPS on that file system, and the problems just disappeared.

And that was like a big deal.

That's what we did.

Another thing you can do is dd a bunch of garbage into the file system to just make it bigger, which gives you more credits.

So it keeps the burst credits stable.

Yeah but you get more credits because your system's bigger.

Yeah — you pay for a little bit more storage, but you're paying less for that extra storage than you'd pay for the provisioned IOPS.

Oh, really.

So that's Yeah.

So you just dd it.

Yeah — you just dd a terabyte of garbage into your EFS and you get a bunch of credits.
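The ballast trick itself is just dd; here's a hedged sketch where the target path and size are illustrative defaults — point TARGET at your actual EFS mount and size it to taste:

```shell
#!/bin/sh
# Pad a file system with ballast so the burst-credit baseline (which scales
# with stored bytes) goes up. TARGET and SIZE_MB are illustrative defaults.
TARGET="${TARGET:-/tmp/ballast.bin}"   # e.g. /mnt/efs/ballast.bin in practice
SIZE_MB="${SIZE_MB:-8}"                # raise toward 1048576 for ~1 TiB
dd if=/dev/zero of="$TARGET" bs=1M count="$SIZE_MB" 2>/dev/null
echo "wrote $SIZE_MB MiB to $TARGET"
```

Whether the extra storage cost beats provisioned throughput is worth checking against current EFS pricing before relying on it.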

All right, everyone you heard it here first.

The same is true for RDS, by the way.

By the way.

Yeah, it's cheaper to just make a bigger regular gp2 volume on your RDS instance and get the 3 IOPS per gig than it is to pay for provisioned IOPS.

Yeah, it is the whole deal.

First of all, it's still.

Yes it's cheaper.

Yeah, that's all I'm saying.

The same goes for EBS warming, too.

Food for thought.

Yeah Yeah.

That is true.

Yeah, I remember EBS warming — I remember having to have a script that would just cat the device or whatever to warm it up.

Ah, the good old days.

Cool Etsy or so.

Interesting thing: it was just announced today that Docker sold off the enterprise portion of their business to Mirantis. Mirantis, if you don't know, is like one of the preeminent professional-services companies for cloud technologies.

They got their start on, like, managed installations of OpenStack and have branched out to Kubernetes and everything else.

Somebody said, kind of trolling, you know: who even uses Docker Enterprise?

And I think there's a point there.

Yeah, but the more interesting thing is like: wait —

This is kind of scary to me because like Docker doesn't really even have a business model, right now.

And you know they're running out of runway with their venture capital.

And then they've sold the enterprise business.

Granted that wasn't generating a lot of money.

But like what's the plan.

Now like the one the one revenue generator they had is now gone.

So does this mean, I don't need to log in to download Docker anymore.

Are they going to get rid of that.

Like, what happens when Docker Hub goes away?

I mean, if they go away, you know, what will happen?

I thought.

Honestly, I think Docker Hub is to us in our industry — you know, DevOps — what YouTube is to social media, right?

It's so critical to the entire industry.

But it's too expensive for a business that depends on it to exist and survive — too big to fail.

So I think Docker Hub has to be acquired by a company like Microsoft that can afford a loss leader to keep it up.

It makes me think of the Hunter S. Thompson phrase: too weird to live, too rare to die.

I can't remember exactly the phrase from fear and Loathing in Las Vegas.

Well, the "too big to fail" quote is from 2008 — all the banks, you know, all the banks getting bailed out.

Yeah Oh, yeah.

But yeah.

Yeah, there was a movie called Fear and Loathing in Las Vegas about Hunter S. Thompson, and there's a line in it where he's talking about his lawyer, and it goes:

"There he goes — one of God's own creations, too weird to live, and too rare to die," or something.

So doctor doctor.

I don't have a huge problem with it, because you can put your own stuff up on Docker Hub.

And whatever.

What I have a huge problem with — and I'm really glad that they're fixing it with Helm 3 — is the stable repository of Helm charts.

Oh, wait, no.

So what are they fixing.

With Helm 3, they're deprecating that repo and moving everybody over to Helm Hub.

Yeah because that's like trying to get trying to get a pull request or something into that repo is like pulling teeth.

Oh, it's in class.

We gave up.

We gave up long ago.

Yeah So it's like that.

I don't mind Docker Hub; I can push my own thing to Docker Hub whenever I want and people can use it.

But the helm chart stable repository is what infuriated me.

But Helm Hub...

And I plead ignorance here.

I just used it for discovery.

But it's not like a chart aggregator is it like it doesn't cache the charts.

You can't.

Not as an upstream chart repo, no.

No no it's just a place to find where people are publishing charts.

Which is cool.

And all, but one of the problems.

I mean, one of the problems, right, is when you start depending on all these third party charts, they go away or they get deprecated, and you depend on them and you don't localize them.

So I would like a hybrid like I can self manage it.

But they proxy it somehow kind of.

Well, I guess that I was going to say Terraform registry.

But they don't; they proxy that, they don't own it.

They don't care.

Where's pier.

I'll file the feature request.

Yeah, proxy the helm charts.

That'd be cool, actually.

Any updates to helm notifier?

I added a readme.

Oh, very nice.

Thank you.

To give context, if you weren't on the call last week:

This is really neat.

It's called helm notifier.

It's an open source project.

One of the SweetOps members wrote it.

What's cool about it is you can look at any one of these repos any one of these helm charts.

And you can compare chart versions between each other.

And so, here's the difference between any two chart versions of this chart.

Now the UI is kind of limited in what you can compare.

But the URL is totally unrestricted.

So you can just hack the URL and compare any two versions of any helm chart to see what the differences are, and that should de-risk your upgrades in the future.
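If you want the same comparison without the tool, plain `diff` gets you most of the way. A self-contained sketch below uses stand-in chart directories; in real use you'd fetch the two versions first with `helm pull <repo>/<chart> --version <v> --untar`, so the chart name and contents here are made up:

```shell
# Fake two fetched chart versions (stand-ins for two `helm pull --untar` results).
mkdir -p chart-1.0.0/templates chart-1.1.0/templates
printf 'apiVersion: v2\nname: demo\nversion: 1.0.0\n' > chart-1.0.0/Chart.yaml
printf 'apiVersion: v2\nname: demo\nversion: 1.1.0\n' > chart-1.1.0/Chart.yaml
printf 'replicas: 1\n' > chart-1.0.0/values.yaml
printf 'replicas: 2\nnewFlag: true\n' > chart-1.1.0/values.yaml

# A recursive unified diff surfaces every changed default and template
# between the two releases; `|| true` because diff exits 1 on differences.
diff -ru chart-1.0.0 chart-1.1.0 || true
```

Reviewing that diff before a `helm upgrade` is the same de-risking step the UI automates.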

So he's working on a feature that I requested where you can inspect any individual version of the whole chart as well.

I just posted a screenshot he sent to me earlier.

I don't think it's alive yet, but it's nice.

He also calls out that this started out as just some tool for him to use personally, because he was tired of having to figure things out.

And so like, yeah.

The UI being really rough was just it worked for him.

And so he didn't spend any more time on it.

Right but now it's an open source project.

You know we can help him with that.

Yeah, helm notifier, or whatever it's called, is what I use.

Yeah, you're welcome for the readme. Yes.

Here it is.

I just.

I had one comment about the Mirantis acquisition.

Yeah, what's up.

I was just reading the TechCrunch article, because this was brand new, right?

Yeah, it just came out today.

So the list in the TechCrunch article says: with this deal, Mirantis is acquiring the Docker Enterprise technology platform and all associated IP.

And then they list each one: Docker Enterprise Engine, Docker Trusted Registry, Docker Unified Control Plane, and Docker CLI.

But I think it's the enterprise CLI for managing those components.

OK, I'm going to hope I'm going to hope that's right.

Yeah, I think so.

The CLI itself: the deal just says Docker CLI.

Everything else is "Enterprise" or "Trusted" or whatever, but this just says Docker CLI.

Is the Docker CLI open source? It was, wasn't it?

No, but now you have a new company to go talk to about your problem.

Yeah, right.

In related news, also announced today: Quay is open source.

Quay, the premium Docker registry, was acquired by CoreOS, and then CoreOS was gobbled up by Red Hat.

And now they're open sourcing it.

True to their ethos. I think Quay was also a chart registry, right?

The crazy thing that I always loved was it came built in with static container analysis with Clair.

Yeah, and you just got it for nothing; that was great.

Yeah So that's pretty cool.

Which is interesting, because we were just gearing up to deploy Harbor. Not a good story for Harbor, by the way.

But Harbor is kind of like the open source alternative to Quay, before Quay was open source. Harbor uses Clair under the hood as well for container scanning.

What's cool about Harbor is you can use it as a pull-through cache for Docker Hub, which, I wonder if Quay supports pull-through caching as well.
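For reference, this is roughly what pointing a Docker daemon at a pull-through cache looks like in /etc/docker/daemon.json; the mirror hostname is hypothetical, and the cache itself (Harbor's proxy cache, or a registry:2 running in mirror mode) has to already exist:

```json
{
  "registry-mirrors": ["https://mirror.example.com"]
}
```

Restart the daemon after editing the file; note that `registry-mirrors` only applies to images pulled from Docker Hub.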

Artifactory does.

But only the commercial one; the open source Artifactory doesn't do it.

I don't believe that. My company is currently looking at the commercial Artifactory.

It's not that expensive.

It's like, what, $3,000 a year for unlimited users.

I mean, it's not that much money.

Yeah, for the professional.

I thought it was something like $3,000 per user or something.

No, but JFrog's pricing, when I reached out in the past, seemed prohibitively expensive.

But if they're saying $3,000 a month... sorry, $3,000 a year for Artifactory with unlimited users, well, that seems totally reasonable.

That's what I heard.

I mean, I'll go.

I'm not doing a little research.

But yeah.

There may be usage limits there.

I think I just remember hearing in stand-up today, when we were discussing our usage of it, right.

Like, the number of pulled images was too high.

I'm sick of all of them.

Yeah Yeah, I got it.

I got you, bro.

Justin right there.

I love him.

Yeah is just the first link there and a wonder.

What was that.

So, Artifactory, going under self-hosted.

Artifactory Pro.

So I'm looking at on-prem, because that's what we do.

Like, we do all on-prem because we're a government system integrator; you know, people run away from cloud unless it's like AWS.

We've got lots of AWS stuff going on.

And Azure and whatever.

But all the different SaaS services, not so much.

But Yeah.

$2,950 a year, unlimited number of users.

I mean, that's what we're looking at dude.

But no freaking S3 storage.

That's S3?

Yeah, that's S3.

No, no, it says no.

It says that's not included.

Well, but that's weird.

Why not S3, right?

I mean, if you already have S3, what do you need that for? No, no, no.

Persistence of your artifacts.

I don't want to manage that on EBS volumes.

Oh, I gotcha so I got I got it.

So I guess that means using EFS, but still, S3 seems like a better option than EFS.

I gotcha gotcha gotcha.

So what we cared about was the universal support for all major package types, because we want private NPM, private helm, private Docker, private Maven, and all that stuff.

Well, we'll figure out GlusterFS or whatever, you know.

Gluster? I've never met any company ever that's been successful with that.

That's been successful with that.

Yeah Every time I've used it.

It's been like this false promise.

And I regretted it.

OK Rebalancing well yes yes yes yes yes.

I'd be curious about this Artifactory, because we used it at previous companies, but I wasn't involved with it.

The basic thing is:

You know you have to get the enterprise version, because only that has high availability built in, and for a service like this, it needs to be up.

Like, you can't deploy without it.

You can't build without it.

Yeah. So I'm wondering, does that just mean you have to know how to make a replica set of this Docker container?

And that's their "high availability built in" that they're charging $26,000 a year for?

What's that mean? I'm not optimistic. If we take a look at Jenkins, the open source one, making that HA without enterprise is like impossible, right?

It's by design.

They cripple it to not support running concurrent copies, even on something like an elastic file system.

So OK, this is good.

So, like, without Artifactory, if you want...

So we've talked about private helm; private Docker, which sounds like Harbor or Quay.

Yeah, Quay.

Does Quay have helm?

It does.

OK, so Quay. What about, like, private NPM, for that?

Yeah, so, yeah, I don't have experience with the private NPM.

OK. And then everything needs to have, like, SAML or whatever SSO.

Good luck getting that with anything that costs less than like $30,000 a year.

Well, there's nothing.

We have.

Well, we have SAML at zero cost.

Yeah So lucky.

Yeah, I think Harbor did when we looked at it last; it had some kind of open source SSO support.

But with any kind of, like, SaaS product, forget it.

That's what I meant.

Open source, like Harbor; Harbor's open source.

Now, like, if we're not going to go with Artifactory because of all this stuff you guys are talking about, like, you know, they're going to hamstring high availability to make you pay $30,000 a year, which, for a big company...

I mean, you still get an unlimited number of users. $30,000 a year is $1 a user for me, right?

No, I mean, at the size company you're at, this is, yeah, a no-brainer.

Even at $1 a user, you also have to factor in the cost of engineering, right, the human effort, which is much greater.

Yeah, so I think it's well worth it with that consideration.

So I don't know.

So I don't know. It doesn't seem to me that Harbor supports NPM.

There's, I forgot what the website is, does anyone know, there's a website that shames people for putting SSO behind paywalls, like, you know, behind a pro version or enterprise version or whatever.

So Charles sir.

No like I said.

So as always.

And Alex, you'd love this, right, the enterprise pricing.

I figured. I get so pissed off at that.

Yeah, probably.

So, what was it, sso.tax?

Check out sso.tax.

Oh, I love it.

OK that's us.

There's a giant list of companies, and it's got the percent increase of how much extra you have to pay to get SSO. Go down a little bit.

And there's a table.

I love it.

Yeah, I did a comparison somewhere.

Let's see.

Yeah Oh, this is great.

Oh, thank you for sharing that. Would someone mind filling me in on the unethical aspect of this?

SSO makes you more secure, no?

And it's like these people are trying to fleece you of more money just to be more secure.

My favorite is all the security tools that do this.

Like, up above this table, there are three or four paragraphs explaining why this is a problem.

Like Yeah.

Yeah. And like Slack, right? You know, $6.67 a user. OK, I can stomach that.

But now I want to enable SSO.

And look, we're a small company.

But I have people come and go.

I don't want to have to log in all the time and remove users.

It's like, I just want to use G Suite, and G Suite is free, basically, right?

So why can't I just get that?

No, I need to double my costs to use single sign-on. And look, I'm not an enterprise, but I can benefit from it.

So it gives me high blood pressure too.

But when you go through, like, the thought experiment:

All right.

What are all the ways we can charge companies for whom our product is more valuable?

I get it.

Single sign on is a pretty good trigger indicator.

But there are a lot of false positives.

And collateral damage for a smaller company as well, like Sentry charging 200 percent.

Some of them are ridiculous, in the 500% area of the table.

Wow, is Zapier here?

There's your, there's your app.

It's down at the bottom.

Yeah Yeah.

No, I saw.

So the only places I use single sign on are where it doesn't cost me $1 or more to use it.

Unfortunately, I just can't justify it.

That's like this is why.

OK, Eric runs a company.

He's got six employees or I don't know how many employees.

He has.

But he's got six.

He hires one more.

You know that.

He adds them to a dozen different services that Eric runs.

You know all over the place.

You know each one has its own user name and password.

And that employee decides to show up drunk one day and Eric needs to fire them.

And you know he needs to remember to go through every single service and take away their user account versus go into his Active Directory or whatever, go look, they're gone.

Yeah, exactly.

And keep in mind that thousands and thousands of companies depend on what we have out there, and we need to be secure, but the companies fleece us. I can't do it.

Anyway there is.

Thank you for sharing this this was great.

I'm going to use this refer to this company.

I read an article that apparently someone got a hold of his cloud credentials, and this guy deleted like 10 grand worth of servers. It doesn't surprise me.

That's all I care about.

That was one of the best stories.

Like, this guy does an amazing job telling the story.

The chief security officer talk, it talks about how ShapeShift got hacked.

This one here, he always does a great job.

Listen, first of all.

It's why you need to use certificate-based SSH, but also why single sign-on is critical. And implicitly, when I say single sign-on, what I mean is MFA, or multi-factor authentication.

So that when passwords are compromised, somebody can't just reuse those credentials, log in to your systems, and delete $10,000 worth of servers.
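To make the certificate-based SSH point concrete: a trusted CA signs short-lived user certificates, so a stolen key stops working as soon as its certificate expires. A minimal OpenSSH sketch; the file names and the principal "alice" are made up:

```shell
# 1. Create a CA keypair (in real life this lives somewhere locked down).
ssh-keygen -q -t ed25519 -f ssh_ca -N '' -C 'example-ca'

# 2. Create an ordinary user keypair.
ssh-keygen -q -t ed25519 -f alice_key -N '' -C 'alice'

# 3. Sign the user's public key: cert valid for 1 hour, only for principal "alice".
ssh-keygen -s ssh_ca -I alice-session -n alice -V +1h alice_key.pub

# 4. Inspect the resulting certificate (written to alice_key-cert.pub).
ssh-keygen -L -f alice_key-cert.pub
```

Servers then trust the CA via `TrustedUserCAKeys` in sshd_config instead of per-user authorized_keys files, and expiry handles revocation automatically.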

So this was a blockchain company called ShapeShift, and one of their engineers was corrupted, offered a bunch of money to give up his keys to some hacker. The hacker used them and compromised the systems.

They quickly traced it back to that engineer, and they fired him.

You know, like, how many thousands of bitcoins were stolen, something like that.

And then the hackers...

So they shut down everything, they changed the keys, and like the next day or a few days later more bitcoins were stolen, and they couldn't figure out what was going on.

And it turns out that basically they had installed a rootkit on somebody's laptop and every time they rotated the passwords.

They were just getting the latest password, and they were able to use that to keep compromising.

Then they rebuilt their entire infrastructure, I think in a separate cloud or in a new account, something like that.

And they were compromised again, because they didn't have multi-factor.

All right.

So we are basically at the end of the office hours for today.

I use Duet to share my screen.

And now somebody's going to say, I can't see my mouse on that screen.

Oh, well.

So, first office hours I've been to. I keep meaning to come, but I get the email and then read it like five hours later.

Yeah. And also, time zones can be complicated.

And now I'm going to have it on my work calendar as "I'm busy during this time."

Like people don't see what my events are.

They just say I'm busy.

Do you work... I assume you work remotely, because I hear your house in the background.

My, my messy office.

Yeah, got to turn on my, got to turn on my virtual background when I'm talking to customers and stuff.

So, Mike, like, where are you?

Are you on the call, Mike?

No, you're not.

But one of my buddies who's in SweetOps, he kicked the tires on AWS Control Tower and did not have many positive things to say.

So I was hoping to hear from him today, but he couldn't make it.

Well, we'll leave that as a talking point for next week.

Other than that, looks like we covered everything. OK.

GitHub Actions, I'll take that for next time.

More, more fun experiments with GitHub Actions.

Anyway, guys, it was a great discussion.

Always have a good time.

Thank you for showing up.

I'll see you guys next week same place same time.

Bye, guys.


Effortless Helm Chart Deployments (Video & Slides)

admin | CI/CD, DevOps, Meetup


Learn how to deploy complex service-oriented architectures easily using Helmfiles. Forget umbrella charts and manual helm deployments. Helmfile is the missing piece of the puzzle. Helmfiles are the declarative way to deploy Helm charts in a 12-factor compatible way. They're great for deploying all your kubernetes services and even for Codefresh continuous delivery to Kubernetes. We'll show you exactly how we do it with a live demo, including public repos for all our helmfiles.
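For a concrete sense of the declarative style, here is a minimal, hypothetical helmfile.yaml; the repository URL, chart, and values are placeholders:

```yaml
repositories:
  - name: stable
    url: https://kubernetes-charts.storage.googleapis.com

releases:
  - name: ingress
    namespace: kube-system
    chart: stable/nginx-ingress
    version: "1.26.2"
    values:
      - controller:
          replicaCount: 2
```

Running `helmfile apply` then reconciles the cluster against whatever the file declares, which is what makes it 12-factor friendly: the desired state lives in Git, not in someone's shell history.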

Top 5 DevOps Bad Habits

admin | DevOps, OpEd

Recently, we were asked to name our top (5) DevOps “Worst Practices” (or anti-patterns). Here's what we came up with…

#1. Not looking outside the organization to see how others are solving the problem. Always building new things rather than looking for readymade solutions (open source, SaaS, or enterprise offerings) which leads to piles of technical debt & inevitable snowflake infrastructures.

#2. Not building easy tools that the rest of the company can use. We must never forget who we are serving. Developers are our customers too.

#3. Not treating DevOps as a shared responsibility. It needs to be embedded into the engineering organization, not relegated to a select few individuals. “DevOps” is more of a philosophy than a job title.

#4. Not treating Infrastructure as Code. We call this new paradigm GitOps, where Git is the system of record for all infrastructure and CI/CD is our delivery mechanism.

#5. Committing directly to `master`. We don't do it in regular software projects, and we shouldn't do it for ops. Everyone should follow the standard Git workflow for their infrastructure code (feature branching, pull requests, code reviews, CI/CD). This increases transparency and helps the rest of the team stay up to date with everything going on.
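The workflow in #5 can be exercised end to end locally. Everything below (repo name, file, commit messages) is made up, and in practice the merge happens through your Git host's pull request review rather than a local `git merge`:

```shell
# A toy infrastructure repo following the feature-branch workflow.
git init -q demo-infra && cd demo-infra
git config user.email 'dev@example.com' && git config user.name 'dev'
git commit -q --allow-empty -m 'initial commit'

# 1. Every change starts on a branch, never on the default branch.
git checkout -q -b feature/add-vpc
echo 'module "vpc" { source = "./modules/vpc" }' > main.tf
git add main.tf
git commit -q -m 'feat: add VPC module'

# 2. After review and green CI, merge back (simulated with --no-ff so the
#    history shows an explicit merge commit, the way a merged PR would).
git checkout -q -
git merge -q --no-ff -m 'Merge pull request: add VPC module' feature/add-vpc
git log --oneline
```

The payoff is the transparency described above: every infrastructure change is a reviewable unit with an author, a diff, and a merge record.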

Unlimited Staging Environments

admin | CI/CD, DevOps, Slides

How to run complete, disposable apps on Kubernetes for Staging and Development

What if you could rapidly spin up new environments in a matter of minutes entirely from scratch, triggered simply by the push of a button or automatically for every Pull Request or Branch. Would that be cool?

That’s what we thought too! Companies running complex microservices architectures need a better way to do QA, prototype new features & discuss changes. We want to show that there’s a simpler way to collaborate and it’s available today if you’re running Kubernetes.

Tune in to learn how you can assemble 100% Open Source components with a CodeFresh CI/CD Pipeline to deploy your full stack for any branch and expose it on a unique URL that you can share. Not only that, we ensure that it’s fully integrated with CI/CD so console expertise is not required to push updates. Empower designers and front-end developers to push code freely. Hand it over to your sales team so they can demo upcoming features for customers! The possibilities are unlimited. =)

Slides